Test Report: Docker_macOS 18647

cbf61390ee716906db88190ad6530e4e486e1432:2024-04-15:34045

Failed tests (22/216)

TestOffline (758s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m37.096328739s)
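
The 12m37s wall time is consistent with two back-to-back 360-second createHost timeouts (StartHostTimeout:6m0s in the cluster config captured below) plus setup and cleanup between attempts. To replay the failing invocation outside the test harness, here is a minimal Go sketch; the command line is copied verbatim from the log above, and this is not the actual helper from aab_offline_test.go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same arguments as the failing test invocation above.
        cmd := exec.Command("out/minikube-darwin-amd64", "start",
            "-p", "offline-docker-189000",
            "--alsologtostderr", "-v=1",
            "--memory=2048", "--wait=true", "--driver=docker")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("exit:", err) // this run: exit status 52
        }
    }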

-- stdout --
	* [offline-docker-189000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-189000" primary control-plane node in "offline-docker-189000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-189000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 18:03:51.167515   10505 out.go:291] Setting OutFile to fd 1 ...
	I0415 18:03:51.167793   10505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:03:51.167798   10505 out.go:304] Setting ErrFile to fd 2...
	I0415 18:03:51.167801   10505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:03:51.167977   10505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 18:03:51.169453   10505 out.go:298] Setting JSON to false
	I0415 18:03:51.192397   10505 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5602,"bootTime":1713223829,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 18:03:51.192502   10505 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:03:51.214249   10505 out.go:177] * [offline-docker-189000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 18:03:51.256261   10505 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 18:03:51.256269   10505 notify.go:220] Checking for updates...
	I0415 18:03:51.298151   10505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 18:03:51.319174   10505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 18:03:51.340071   10505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:03:51.361180   10505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 18:03:51.382169   10505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:03:51.403313   10505 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:03:51.457732   10505 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 18:03:51.457919   10505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:03:51.650745   10505 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:102 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-16 01:03:51.601221947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress
:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12
-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/
docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:03:51.672058   10505 out.go:177] * Using the docker driver based on user configuration
	I0415 18:03:51.714027   10505 start.go:297] selected driver: docker
	I0415 18:03:51.714052   10505 start.go:901] validating driver "docker" against <nil>
	I0415 18:03:51.714069   10505 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:03:51.718000   10505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:03:51.829182   10505 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:102 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-16 01:03:51.818265607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress
:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12
-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/
docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:03:51.829398   10505 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:03:51.829581   10505 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:03:51.851124   10505 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 18:03:51.873144   10505 cni.go:84] Creating CNI manager for ""
	I0415 18:03:51.873191   10505 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:03:51.873203   10505 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 18:03:51.873325   10505 start.go:340] cluster config:
	{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:03:51.895174   10505 out.go:177] * Starting "offline-docker-189000" primary control-plane node in "offline-docker-189000" cluster
	I0415 18:03:51.958248   10505 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 18:03:52.021037   10505 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 18:03:52.041983   10505 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:03:52.042059   10505 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:03:52.042078   10505 cache.go:56] Caching tarball of preloaded images
	I0415 18:03:52.042074   10505 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 18:03:52.042331   10505 preload.go:173] Found /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:03:52.042352   10505 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:03:52.043858   10505 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/offline-docker-189000/config.json ...
	I0415 18:03:52.043986   10505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/offline-docker-189000/config.json: {Name:mk2e4fb6839e609acea6328ea7c3ea5d98f399e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:03:52.091885   10505 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0415 18:03:52.091922   10505 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0415 18:03:52.091952   10505 cache.go:194] Successfully downloaded all kic artifacts
	I0415 18:03:52.092114   10505 start.go:360] acquireMachinesLock for offline-docker-189000: {Name:mk00d8ad03e0f748641f1e6d71a49db2b10e9eb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:03:52.092283   10505 start.go:364] duration metric: took 155.553µs to acquireMachinesLock for "offline-docker-189000"
	I0415 18:03:52.092309   10505 start.go:93] Provisioning new machine with config: &{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:03:52.092418   10505 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:03:52.134799   10505 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:03:52.135000   10505 start.go:159] libmachine.API.Create for "offline-docker-189000" (driver="docker")
	I0415 18:03:52.135023   10505 client.go:168] LocalClient.Create starting
	I0415 18:03:52.135132   10505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:03:52.135182   10505 main.go:141] libmachine: Decoding PEM data...
	I0415 18:03:52.135199   10505 main.go:141] libmachine: Parsing certificate...
	I0415 18:03:52.135287   10505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:03:52.135326   10505 main.go:141] libmachine: Decoding PEM data...
	I0415 18:03:52.135333   10505 main.go:141] libmachine: Parsing certificate...
	I0415 18:03:52.135869   10505 cli_runner.go:164] Run: docker network inspect offline-docker-189000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:03:52.251175   10505 cli_runner.go:211] docker network inspect offline-docker-189000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:03:52.251316   10505 network_create.go:281] running [docker network inspect offline-docker-189000] to gather additional debugging logs...
	I0415 18:03:52.251337   10505 cli_runner.go:164] Run: docker network inspect offline-docker-189000
	W0415 18:03:52.301592   10505 cli_runner.go:211] docker network inspect offline-docker-189000 returned with exit code 1
	I0415 18:03:52.301626   10505 network_create.go:284] error running [docker network inspect offline-docker-189000]: docker network inspect offline-docker-189000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-189000 not found
	I0415 18:03:52.301640   10505 network_create.go:286] output of [docker network inspect offline-docker-189000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-189000 not found
	
	** /stderr **
	I0415 18:03:52.301785   10505 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:03:52.405281   10505 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:03:52.406932   10505 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:03:52.407281   10505 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00219f000}
	I0415 18:03:52.407295   10505 network_create.go:124] attempt to create docker network offline-docker-189000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 18:03:52.407354   10505 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-189000 offline-docker-189000
	I0415 18:03:52.494504   10505 network_create.go:108] docker network offline-docker-189000 192.168.67.0/24 created
	I0415 18:03:52.494544   10505 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-189000" container
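
The subnet selection above is mechanical: candidate 192.168.x.0/24 networks are tried with the third octet advancing by 9 (49, 58, 67 here; the retry later in this log continues with 76, 85, 94), the first unreserved one wins, the gateway takes .1, and the single node gets the static .2 address. A sketch of that scan under those inferred rules; freeSubnet is a hypothetical name, not minikube's network.go:

    package main

    import "fmt"

    // freeSubnet mimics the scan seen in the log: advance the third
    // octet by 9 and take the first unreserved /24.
    func freeSubnet(reserved map[int]bool) (subnet, gateway, nodeIP string) {
        for octet := 49; octet <= 247; octet += 9 {
            if reserved[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is reserved\n", octet)
                continue
            }
            return fmt.Sprintf("192.168.%d.0/24", octet),
                fmt.Sprintf("192.168.%d.1", octet),
                fmt.Sprintf("192.168.%d.2", octet)
        }
        return "", "", ""
    }

    func main() {
        // In the first attempt above, 49 and 58 were taken, so 67 won.
        fmt.Println(freeSubnet(map[int]bool{49: true, 58: true}))
        // Output: 192.168.67.0/24 192.168.67.1 192.168.67.2
    }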
	I0415 18:03:52.494658   10505 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:03:52.545646   10505 cli_runner.go:164] Run: docker volume create offline-docker-189000 --label name.minikube.sigs.k8s.io=offline-docker-189000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:03:52.595690   10505 oci.go:103] Successfully created a docker volume offline-docker-189000
	I0415 18:03:52.595793   10505 cli_runner.go:164] Run: docker run --rm --name offline-docker-189000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-189000 --entrypoint /usr/bin/test -v offline-docker-189000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:03:53.076654   10505 oci.go:107] Successfully prepared a docker volume offline-docker-189000
	I0415 18:03:53.076696   10505 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:03:53.076710   10505 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:03:53.076800   10505 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-189000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:09:52.136644   10505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:09:52.136789   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:52.191486   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:09:52.191598   10505 retry.go:31] will retry after 258.927647ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:09:52.452900   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:52.503806   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:09:52.503923   10505 retry.go:31] will retry after 313.564079ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:09:52.819862   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:52.873435   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:09:52.873534   10505 retry.go:31] will retry after 823.993824ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:09:53.699886   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:53.753874   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	W0415 18:09:53.753991   10505 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	
	W0415 18:09:53.754018   10505 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
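
The retry.go entries above show the probe pattern: each failed docker container inspect schedules another attempt after a growing interval (259 ms, 314 ms, 824 ms) until the caller gives up and surfaces the error. A sketch of that loop; inspectWithRetry is a hypothetical name and the backoff constants are illustrative, since the intervals in the log look jittered:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // inspectWithRetry is a hypothetical stand-in for the retry.go loop:
    // probe the container's published SSH port and back off between tries.
    func inspectWithRetry(container string, attempts int) error {
        delay := 250 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("docker", "container", "inspect", "-f",
                `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
                container).Run()
            if err == nil {
                return nil
            }
            time.Sleep(delay)
            delay += delay / 2 // rough growth; the real intervals are jittered
        }
        return err
    }

    func main() {
        // In the run above every attempt fails: the container was never created.
        fmt.Println(inspectWithRetry("offline-docker-189000", 4))
    }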
	I0415 18:09:53.754072   10505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:09:53.754141   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:53.803833   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:09:53.803923   10505 retry.go:31] will retry after 339.485149ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:09:54.143731   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:54.193672   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:09:54.193775   10505 retry.go:31] will retry after 537.291738ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:09:54.733475   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:54.784208   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:09:54.784310   10505 retry.go:31] will retry after 630.16086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:09:55.416882   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:09:55.472232   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	W0415 18:09:55.472340   10505 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	
	W0415 18:09:55.472362   10505 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:09:55.472378   10505 start.go:128] duration metric: took 6m3.380516444s to createHost
	I0415 18:09:55.472384   10505 start.go:83] releasing machines lock for "offline-docker-189000", held for 6m3.380660697s
	W0415 18:09:55.472399   10505 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
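
The 360.000000-second figure is the StartHostTimeout (6m0s) from the cluster config earlier in this log; createHost is abandoned once that deadline passes, which is why the duration metric lands just over six minutes (6m3.38s). A minimal sketch of such a deadline guard, assuming a context-based timeout; createHostWithTimeout is hypothetical, not minikube's start.go:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // createHostWithTimeout is a hypothetical guard reproducing the log's
    // "create host timed out in %f seconds" shape when create never returns.
    func createHostWithTimeout(timeout time.Duration, create func(context.Context) error) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        errc := make(chan error, 1)
        go func() { errc <- create(ctx) }()
        select {
        case err := <-errc:
            return err
        case <-ctx.Done():
            return fmt.Errorf("create host timed out in %f seconds", timeout.Seconds())
        }
    }

    func main() {
        // The config's StartHostTimeout is 6m0s (360 s); a short value keeps the demo quick.
        err := createHostWithTimeout(2*time.Second, func(ctx context.Context) error {
            select {} // a create that never finishes, like the never-started container here
        })
        fmt.Println(err) // create host timed out in 2.000000 seconds
    }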
	I0415 18:09:55.472832   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:09:55.521052   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:09:55.521115   10505 delete.go:82] Unable to get host status for offline-docker-189000, assuming it has already been deleted: state: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	W0415 18:09:55.521199   10505 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 18:09:55.521209   10505 start.go:728] Will try again in 5 seconds ...
	I0415 18:10:00.521533   10505 start.go:360] acquireMachinesLock for offline-docker-189000: {Name:mk00d8ad03e0f748641f1e6d71a49db2b10e9eb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:10:00.521796   10505 start.go:364] duration metric: took 219.019µs to acquireMachinesLock for "offline-docker-189000"
	I0415 18:10:00.521839   10505 start.go:96] Skipping create...Using existing machine configuration
	I0415 18:10:00.521855   10505 fix.go:54] fixHost starting: 
	I0415 18:10:00.522276   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:00.574778   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:00.574825   10505 fix.go:112] recreateIfNeeded on offline-docker-189000: state= err=unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:00.574844   10505 fix.go:117] machineExists: false. err=machine does not exist
	I0415 18:10:00.595180   10505 out.go:177] * docker "offline-docker-189000" container is missing, will recreate.
	I0415 18:10:00.637136   10505 delete.go:124] DEMOLISHING offline-docker-189000 ...
	I0415 18:10:00.637345   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:00.686570   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	W0415 18:10:00.686626   10505 stop.go:83] unable to get state: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:00.686641   10505 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:00.687005   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:00.734874   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:00.734934   10505 delete.go:82] Unable to get host status for offline-docker-189000, assuming it has already been deleted: state: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:00.735026   10505 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-189000
	W0415 18:10:00.783659   10505 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-189000 returned with exit code 1
	I0415 18:10:00.783694   10505 kic.go:371] could not find the container offline-docker-189000 to remove it. will try anyways
	I0415 18:10:00.783770   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:00.832046   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	W0415 18:10:00.832099   10505 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:00.832176   10505 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-189000 /bin/bash -c "sudo init 0"
	W0415 18:10:00.880054   10505 cli_runner.go:211] docker exec --privileged -t offline-docker-189000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 18:10:00.880096   10505 oci.go:650] error shutdown offline-docker-189000: docker exec --privileged -t offline-docker-189000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:01.882529   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:01.936296   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:01.936343   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:01.936359   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:01.936385   10505 retry.go:31] will retry after 507.05696ms: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:02.444330   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:02.495343   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:02.495394   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:02.495409   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:02.495439   10505 retry.go:31] will retry after 819.604746ms: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:03.316021   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:03.368415   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:03.368465   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:03.368478   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:03.368502   10505 retry.go:31] will retry after 1.277392202s: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:04.647454   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:04.699943   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:04.699990   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:04.700001   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:04.700027   10505 retry.go:31] will retry after 1.668415661s: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:06.370808   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:06.421882   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:06.421926   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:06.421938   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:06.421961   10505 retry.go:31] will retry after 1.836973123s: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:08.261277   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:08.313480   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:08.313538   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:08.313548   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:08.313570   10505 retry.go:31] will retry after 4.15169398s: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:12.466337   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:12.518850   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:12.518897   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:12.518907   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:12.518927   10505 retry.go:31] will retry after 7.949316486s: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:20.470577   10505 cli_runner.go:164] Run: docker container inspect offline-docker-189000 --format={{.State.Status}}
	W0415 18:10:20.522447   10505 cli_runner.go:211] docker container inspect offline-docker-189000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:20.522496   10505 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:10:20.522506   10505 oci.go:664] temporary error: container offline-docker-189000 status is  but expect it to be exited
	I0415 18:10:20.522539   10505 oci.go:88] couldn't shut down offline-docker-189000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	 
	I0415 18:10:20.522616   10505 cli_runner.go:164] Run: docker rm -f -v offline-docker-189000
	I0415 18:10:20.572971   10505 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-189000
	W0415 18:10:20.621465   10505 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-189000 returned with exit code 1
	I0415 18:10:20.621578   10505 cli_runner.go:164] Run: docker network inspect offline-docker-189000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:10:20.670197   10505 cli_runner.go:164] Run: docker network rm offline-docker-189000
	I0415 18:10:20.777054   10505 fix.go:124] Sleeping 1 second for extra luck!
	I0415 18:10:21.779224   10505 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:10:21.801165   10505 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:10:21.801340   10505 start.go:159] libmachine.API.Create for "offline-docker-189000" (driver="docker")
	I0415 18:10:21.801371   10505 client.go:168] LocalClient.Create starting
	I0415 18:10:21.801582   10505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:10:21.801683   10505 main.go:141] libmachine: Decoding PEM data...
	I0415 18:10:21.801711   10505 main.go:141] libmachine: Parsing certificate...
	I0415 18:10:21.801786   10505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:10:21.801861   10505 main.go:141] libmachine: Decoding PEM data...
	I0415 18:10:21.801878   10505 main.go:141] libmachine: Parsing certificate...
	I0415 18:10:21.802955   10505 cli_runner.go:164] Run: docker network inspect offline-docker-189000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:10:21.856081   10505 cli_runner.go:211] docker network inspect offline-docker-189000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:10:21.856188   10505 network_create.go:281] running [docker network inspect offline-docker-189000] to gather additional debugging logs...
	I0415 18:10:21.856206   10505 cli_runner.go:164] Run: docker network inspect offline-docker-189000
	W0415 18:10:21.905127   10505 cli_runner.go:211] docker network inspect offline-docker-189000 returned with exit code 1
	I0415 18:10:21.905157   10505 network_create.go:284] error running [docker network inspect offline-docker-189000]: docker network inspect offline-docker-189000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-189000 not found
	I0415 18:10:21.905182   10505 network_create.go:286] output of [docker network inspect offline-docker-189000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-189000 not found
	
	** /stderr **
	I0415 18:10:21.905328   10505 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:10:21.955427   10505 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:10:21.956747   10505 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:10:21.958102   10505 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:10:21.959678   10505 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:10:21.961282   10505 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:10:21.961703   10505 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00226bae0}
	I0415 18:10:21.961717   10505 network_create.go:124] attempt to create docker network offline-docker-189000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0415 18:10:21.961795   10505 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-189000 offline-docker-189000
	I0415 18:10:22.046154   10505 network_create.go:108] docker network offline-docker-189000 192.168.94.0/24 created
	I0415 18:10:22.046195   10505 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-189000" container
	I0415 18:10:22.046308   10505 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:10:22.096447   10505 cli_runner.go:164] Run: docker volume create offline-docker-189000 --label name.minikube.sigs.k8s.io=offline-docker-189000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:10:22.144236   10505 oci.go:103] Successfully created a docker volume offline-docker-189000
	I0415 18:10:22.144347   10505 cli_runner.go:164] Run: docker run --rm --name offline-docker-189000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-189000 --entrypoint /usr/bin/test -v offline-docker-189000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:10:22.395044   10505 oci.go:107] Successfully prepared a docker volume offline-docker-189000
	I0415 18:10:22.395074   10505 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:10:22.395088   10505 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:10:22.395184   10505 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-189000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:16:21.802646   10505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:16:21.802771   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:21.855843   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:21.855964   10505 retry.go:31] will retry after 198.820571ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:22.055449   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:22.105976   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:22.106097   10505 retry.go:31] will retry after 498.029945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:22.605124   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:22.657260   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:22.657355   10505 retry.go:31] will retry after 730.928538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:23.388694   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:23.442021   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	W0415 18:16:23.442144   10505 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	
	W0415 18:16:23.442166   10505 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:23.442223   10505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:16:23.442275   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:23.492042   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:23.492144   10505 retry.go:31] will retry after 251.363359ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:23.744400   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:23.795123   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:23.795231   10505 retry.go:31] will retry after 246.687558ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:24.044339   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:24.094199   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:24.094296   10505 retry.go:31] will retry after 602.297509ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:24.697765   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:24.749979   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	W0415 18:16:24.750084   10505 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	
	W0415 18:16:24.750102   10505 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:24.750113   10505 start.go:128] duration metric: took 6m2.971409975s to createHost
	I0415 18:16:24.750178   10505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:16:24.750230   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:24.800295   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:24.800388   10505 retry.go:31] will retry after 148.997809ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:24.950384   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:24.999284   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:24.999380   10505 retry.go:31] will retry after 258.118544ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:25.257909   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:25.308813   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:25.308906   10505 retry.go:31] will retry after 644.348494ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:25.954656   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:26.004696   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:26.004800   10505 retry.go:31] will retry after 429.810884ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:26.436141   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:26.486415   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	W0415 18:16:26.486545   10505 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	
	W0415 18:16:26.486570   10505 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:26.486628   10505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:16:26.486701   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:26.534409   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:26.534497   10505 retry.go:31] will retry after 324.868136ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:26.861744   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:26.912541   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:26.912649   10505 retry.go:31] will retry after 237.758536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:27.152781   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:27.202916   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	I0415 18:16:27.203013   10505 retry.go:31] will retry after 771.993923ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:27.977395   10505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000
	W0415 18:16:28.030626   10505 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000 returned with exit code 1
	W0415 18:16:28.030723   10505 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	
	W0415 18:16:28.030738   10505 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-189000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-189000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000
	I0415 18:16:28.030750   10505 fix.go:56] duration metric: took 6m27.509503418s for fixHost
	I0415 18:16:28.030756   10505 start.go:83] releasing machines lock for "offline-docker-189000", held for 6m27.509551574s
	W0415 18:16:28.030852   10505 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-189000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-189000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 18:16:28.075199   10505 out.go:177] 
	W0415 18:16:28.096507   10505 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 18:16:28.096565   10505 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 18:16:28.096590   10505 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 18:16:28.118445   10505 out.go:177] 

** /stderr **
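The 18:10:21 lines in the stderr log above show how minikube chose the cluster network: it walks candidate private /24 subnets starting at 192.168.49.0/24, stepping the third octet by 9 (49, 58, 67, 76, 85, ...), skips every range an existing Docker network has already reserved, and creates its bridge on the first free one. Here that was 192.168.94.0/24, with gateway 192.168.94.1 and the node's static IP 192.168.94.2 as the first two host addresses. A minimal Go sketch of that scan, reconstructed from the log output rather than taken from minikube's source:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Subnets the log reports as already reserved by existing Docker networks.
		reserved := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		// Candidate walk: the third octet starts at 49 and grows by 9 per
		// attempt, matching the sequence 49, 58, 67, 76, 85, 94 seen above.
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if reserved[cidr] {
				fmt.Println("skipping subnet", cidr, "that is reserved")
				continue
			}
			_, subnet, err := net.ParseCIDR(cidr)
			if err != nil {
				panic(err)
			}
			fmt.Println("using free private subnet", subnet) // 192.168.94.0/24
			return
		}
	}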
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-04-15 18:16:28.213642 -0700 PDT m=+5984.705418511
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-189000
helpers_test.go:235: (dbg) docker inspect offline-docker-189000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-189000",
	        "Id": "0ded871e78557de57100a2911b86f33c6942c3587bf9cfcbeaaaf51c7143ade8",
	        "Created": "2024-04-16T01:10:22.006711085Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-189000"
	        }
	    }
	]

-- /stdout --
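Note what the post-mortem `docker inspect offline-docker-189000` actually found: the object above is the Docker network created at 18:10:22 (bridge driver, subnet 192.168.94.0/24, an empty "Containers" map), while every `docker container inspect` in the stderr log failed with "No such container". In other words, the network and volume were created but the container itself never came up. When one name can refer to several object kinds like this, `docker inspect --type` pins down which kind exists; a small Go sketch (the helper function is hypothetical, `--type` is a standard `docker inspect` flag):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// existsAs reports whether name resolves as the given Docker object kind.
	// `docker inspect --type <kind>` exits non-zero when no such object exists.
	func existsAs(kind, name string) bool {
		return exec.Command("docker", "inspect", "--type", kind, name).Run() == nil
	}

	func main() {
		name := "offline-docker-189000"
		fmt.Println("container:", existsAs("container", name)) // false in this run
		fmt.Println("network:  ", existsAs("network", name))   // true: the bridge survived
	}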
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-189000 -n offline-docker-189000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-189000 -n offline-docker-189000: exit status 7 (112.115623ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 18:16:28.377254   11461 status.go:249] status error: host: state: unknown state "offline-docker-189000": docker container inspect offline-docker-189000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-189000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-189000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-189000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-189000
--- FAIL: TestOffline (758.00s)
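The repeated "retry.go:31] will retry after ..." lines above show the shape of minikube's retry helper: short jittered waits (roughly 150ms to 800ms in this run) between attempts at the same operation, until an outer deadline gives up. A generic Go sketch in that spirit; the constants and the failing operation are illustrative, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs op with a jittered wait until it succeeds or the
	// deadline passes, logging each pause like the "will retry after" lines.
	func retryUntil(deadline time.Duration, op func() error) error {
		start := time.Now()
		for attempt := 1; ; attempt++ {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
			}
			wait := time.Duration(150+rand.Intn(650)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		err := retryUntil(2*time.Second, func() error {
			return errors.New("No such container: offline-docker-189000")
		})
		fmt.Println(err)
	}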

TestCertOptions (7201.352s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-447000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0415 18:29:58.144420    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 18:30:05.028881    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:30:15.093940    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 18:35:05.026931    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:35:15.090630    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
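The start flags above request extra apiserver SANs (IPs 127.0.0.1 and 192.168.15.15, names localhost and www.google.com) on the nonstandard port 8555, and cert_options_test.go then checks that the generated apiserver certificate actually carries them. A hedged sketch of that kind of SAN check using only the standard library; the certificate path is hypothetical, since in the real test the file lives inside the node:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("apiserver.crt") // illustrative path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // want localhost, www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // want 127.0.0.1, 192.168.15.15
	}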
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (7m37s)
	TestCertOptions (6m53s)
	TestNetworkPlugins (32m52s)

goroutine 2509 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
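The "panic: test timed out after 2h0m0s" is the testing package's global alarm (goroutine 2509, testing.(*M).startAlarm): the whole test binary runs under one -timeout budget, and when it fires every in-flight test is listed with its elapsed time, as in the header above. A test that shells out can bound itself below that budget so a hang fails locally with a useful message instead of taking down the binary; a sketch, with the 15-minute deadline and the profile name chosen arbitrarily:

	package integration

	import (
		"context"
		"os/exec"
		"testing"
		"time"
	)

	func TestBoundedStart(t *testing.T) {
		// A per-test deadline well under the suite's -timeout: a hung
		// `minikube start` fails here rather than in the global alarm.
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "start", "-p", "bounded-demo")
		if out, err := cmd.CombinedOutput(); err != nil {
			t.Fatalf("start failed or timed out: %v\n%s", err, out)
		}
	}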

goroutine 1 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0000ba340, 0xc0009f1bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000010810, {0x1375cf20, 0x2a, 0x2a}, {0xf40dbc5?, 0x10e9e5a8?, 0x1377f2c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000692460)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000692460)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000636b80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2494 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5b0c44b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002c28de0?, 0xc0024e8463?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002c28de0, {0xc0024e8463, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090a658, {0xc0024e8463?, 0x5ad11668?, 0x63?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002ca6960, {0x12455788, 0xc000b88600})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x124558c8, 0xc002ca6960}, {0x12455788, 0xc000b88600}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000092678?, {0x124558c8, 0xc002ca6960})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x1371f190?, {0x124558c8?, 0xc002ca6960?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x124558c8, 0xc002ca6960}, {0x12455848, 0xc00090a658}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000066de0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 591
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 39 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 38
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 590 [syscall, 6 minutes]:
syscall.syscall6(0xc002ca7f80?, 0x1000000000010?, 0x10000000019?, 0x5ab167e0?, 0x90?, 0x1405d5b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0020b18a0?, 0xf34e165?, 0x90?, 0x123ba960?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xf47ef05?, 0xc0020b18d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002b283c0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002712580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002712580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0020ceea0, 0xc002712580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0020ceea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0020ceea0, 0x1244ab78)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
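Goroutine 590 shows where TestCertOptions actually spends its time: integration.Run (helpers_test.go:103) calls exec.Cmd.Run, which parks in syscall.wait4 until the minikube child process exits, while the paired "IO wait" goroutines created by the same Start calls (2493/2494 and 2507 below) are the os/exec copiers draining the child's stdout and stderr into buffers. The minimal shape of that pattern:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		var stdout, stderr bytes.Buffer
		cmd := exec.Command("echo", "hello")
		// For non-*os.File sinks, os/exec creates a pipe plus one copier
		// goroutine per stream (the writerDescriptor frames in the dump).
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		// Run = Start + Wait; Wait blocks in wait4 until the child exits,
		// then joins the copier goroutines before returning.
		if err := cmd.Run(); err != nil {
			fmt.Println("exit:", err)
		}
		fmt.Print(stdout.String())
	}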

goroutine 2493 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5b0c3ee0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002c28d20?, 0xc0024ff31c?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002c28d20, {0xc0024ff31c, 0x4e4, 0x4e4})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090a5f8, {0xc0024ff31c?, 0x5af5ffe8?, 0x233?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002ca6930, {0x12455788, 0xc000b885f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x124558c8, 0xc002ca6930}, {0x12455788, 0xc000b885f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000091e78?, {0x124558c8, 0xc002ca6930})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x1371f190?, {0x124558c8?, 0xc002ca6930?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x124558c8, 0xc002ca6930}, {0x12455848, 0xc00090a5f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002c10120?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 591
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 170 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x124796f0, 0xc0009be060}, 0xc00214a750, 0xc002441f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x124796f0, 0xc0009be060}, 0x0?, 0xc00214a750, 0xc00214a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x124796f0?, 0xc0009be060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 191
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 169 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000ad4cd0, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x11f65060?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022dca20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000ad4d00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000159720, {0x12456d80, 0xc002157590}, 0x1, 0xc0009be060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000159720, 0x3b9aca00, 0x0, 0x1, 0xc0009be060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 191
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 591 [syscall, 7 minutes]:
syscall.syscall6(0xc002ca7f80?, 0x1000000000010?, 0x10000000019?, 0x5ab167e0?, 0x90?, 0x1405d5b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0008a3a40?, 0xf34e165?, 0x90?, 0x123ba960?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xf47ef05?, 0xc0008a3a74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002b284e0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0027126e0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0027126e0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0020cf040, 0xc0027126e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0020cf040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc0020cf040, 0x1244ab70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2182 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00273c000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00273c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00273c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc00273c000, 0x1244ac20)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
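The many goroutines parked in testing.(*testContext).waitParallel for 33 minutes (2182 above, plus 2180, 2181, 2183, 2194-2202 and others below) are tests that called t.Parallel() and are queued for one of the limited parallel slots (bounded by -test.parallel); the slots were still held by the long-running tests listed in the panic header when the alarm fired. The pattern in miniature, with illustrative subtest names:

	package integration

	import "testing"

	func TestPluginsDemo(t *testing.T) {
		for _, name := range []string{"auto", "kindnet", "calico"} {
			name := name // capture the loop variable for the closure
			t.Run(name, func(t *testing.T) {
				// Parks in waitParallel until the runner grants a slot;
				// with slow tests holding every slot, this wait is what
				// shows up as "chan receive, 33 minutes" in the dump.
				t.Parallel()
				_ = name // the real network-plugin checks would go here
			})
		}
	}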

goroutine 665 [IO wait, 115 minutes]:
internal/poll.runtime_pollWait(0x5b0c40d0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000915b00?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000915b00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000915b00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000a06620)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000a06620)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007da690, {0x1246d080, 0xc000a06620})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007da690)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00245d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 662
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
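Goroutine 665 is a leftover from the functional tests: startHTTPProxy (functional_test.go:2208) serves a local HTTP proxy in a goroutine, and the listener sits in Accept for the rest of the binary's life, hence "IO wait, 115 minutes". The general shape of that helper, with the handler and address as assumptions:

	package main

	import (
		"fmt"
		"io"
		"net"
		"net/http"
	)

	func main() {
		// Ephemeral local listener, as a test helper would use.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		go func() {
			// Serve parks in Accept between connections; nothing ever
			// closes ln, so the goroutine outlives the test that spawned
			// it, exactly the state goroutine 665 shows above.
			_ = http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				fmt.Fprintln(w, "ok")
			}))
		}()
		resp, err := http.Get("http://" + ln.Addr().String())
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Print(string(body))
	}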

goroutine 190 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022dcb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 178
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 191 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000ad4d00, 0xc0009be060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 178
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 171 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 170
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 874 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x124796f0, 0xc0009be060}, 0xc002149f50, 0xc000ad7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x124796f0, 0xc0009be060}, 0x0?, 0xc002149f50, 0xc002149f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x124796f0?, 0xc0009be060?}, 0xc0022689c0?, 0xf481bc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002149fd0?, 0xf4c7ec4?, 0xc0028f2300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 883
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 2194 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000ba1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000ba1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000ba1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000ba1a0, 0xc0009ae080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1781 [syscall, 98 minutes]:
syscall.syscall(0x0?, 0xc00276eea0?, 0xc002149ef0?, 0xf3ee05d?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc00276ed80?, 0xc000501a40?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1760
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
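Goroutine 1781 has been waiting 98 minutes inside syscall.Flock: minikube serializes machine operations through an advisory file lock (github.com/juju/mutex/v2, mutex_flock.go), and this waiter never acquired the lock before the alarm fired. The underlying Unix primitive in a few lines; the lock path is hypothetical, and syscall.Flock exists on Darwin and Linux but not Windows:

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	func main() {
		f, err := os.OpenFile("/tmp/demo.lock", os.O_CREATE|os.O_RDONLY, 0o600)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// LOCK_EX blocks until no other process holds the lock, which is
		// the syscall frame goroutine 1781 is parked in above.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
			panic(err)
		}
		fmt.Println("lock acquired")
	}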

goroutine 873 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007790d0, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x11f65060?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021fa9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000779140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a6cd50, {0x12456d80, 0xc002156540}, 0x1, 0xc0009be060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a6cd50, 0x3b9aca00, 0x0, 0x1, 0xc0009be060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 883
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 875 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 997 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027134a0, 0xc002656d80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 996
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2200 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bb1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bb1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000bb1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000bb1e0, 0xc0009ae400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2198 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000baea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000baea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000baea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000baea0, 0xc0009ae300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2181 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020cfd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020cfd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0020cfd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0020cfd40, 0x1244aca8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 882 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021faae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 795
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2171 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020cf6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020cf6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0020cf6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0020cf6c0, 0x1244aca0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2183 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00273c1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00273c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc00273c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc00273c1a0, 0x1244ac38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2109 [chan receive, 33 minutes]:
testing.(*T).Run(0xc0020ce1a0, {0x10e461c9?, 0x42f71283458?}, 0xc00220a048)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0020ce1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0020ce1a0, 0x1244ac58)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 883 [chan receive, 113 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000779140, 0xc0009be060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 795
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 2110 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020ce680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020ce680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0020ce680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0020ce680, 0x1244ac60)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2197 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bab60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000bab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000bab60, 0xc0009ae280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2495 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027126e0, 0xc000067260)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 591
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1163 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023edce0, 0xc002503d40)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1162
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2196 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000ba820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000ba820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000ba820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000ba820, 0xc0009ae200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2507 [IO wait]:
internal/poll.runtime_pollWait(0x5b15fe98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002c28900?, 0xc0024e8c63?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002c28900, {0xc0024e8c63, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b887a8, {0xc0024e8c63?, 0x9?, 0x63?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002ca66f0, {0x12455788, 0xc00090a1e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x124558c8, 0xc002ca66f0}, {0x12455788, 0xc00090a1e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x124558c8, 0xc002ca66f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x1371f190?, {0x124558c8?, 0xc002ca66f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x124558c8, 0xc002ca66f0}, {0x12455848, 0xc000b887a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000201980?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 590
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 2508 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc002712580, 0xc0000670e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 590
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2201 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bb380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bb380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000bb380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000bb380, 0xc0009ae480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1275 [select, 111 minutes]:
net/http.(*persistConn).writeLoop(0xc002501560)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1255
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 1203 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0024cd4a0, 0xc0024d8780)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1202
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2202 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bb520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bb520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000bb520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000bb520, 0xc0009ae500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2180 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020ce000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020ce000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0020ce000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0020ce000, 0x1244ac80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2111 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020ce820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020ce820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0020ce820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0020ce820, 0x1244ac70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2195 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000ba4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000ba4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000ba4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000ba4e0, 0xc0009ae180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2199 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0006e10e0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bb040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bb040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000bb040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000bb040, 0xc0009ae380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1274 [select, 111 minutes]:
net/http.(*persistConn).readLoop(0xc002501560)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1255
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2506 [IO wait]:
internal/poll.runtime_pollWait(0x5b0c41c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002c28840?, 0xc002190b10?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002c28840, {0xc002190b10, 0x4f0, 0x4f0})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b88770, {0xc002190b10?, 0xc0024db500?, 0x22a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002ca66c0, {0x12455788, 0xc00090a1c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x124558c8, 0xc002ca66c0}, {0x12455788, 0xc00090a1c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000112e78?, {0x124558c8, 0xc002ca66c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x1371f190?, {0x124558c8?, 0xc002ca66c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x124558c8, 0xc002ca66c0}, {0x12455848, 0xc000b88770}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002a1a720?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 590
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab
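
The stacks parked in [IO wait] above are the output pumps that os/exec attaches to a child process: when Cmd.Stdout or Cmd.Stderr is anything other than an *os.File, (*Cmd).Start wires the stream through a pipe and launches one io.Copy goroutine per stream (the writerDescriptor.func1 frames, created at exec.go:723 in goroutine 590). They stay blocked in the poller until the child closes its end of the pipe, which is why they outlive the test's 12-minute budget here. A minimal sketch of how such goroutines arise, with a stand-in child process instead of the real minikube invocation:

package main

import (
	"bytes"
	"os/exec"
)

func main() {
	// Capturing output in a bytes.Buffer (as the test helpers do) forces
	// (*Cmd).Start to spawn one copy goroutine per stream; those are the
	// writerDescriptor.func1 frames in the traces above.
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("sleep", "600") // hypothetical stand-in for the long-running child
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Until the child exits or its pipe ends are closed, both copy
	// goroutines sit in internal/poll exactly as shown above.
	_ = cmd.Wait()
}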

goroutine 1273 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026129a0, 0xc0024d9620)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 766
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2193 [chan receive, 33 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0000ba000, 0xc00220a048)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2109
	/usr/local/go/src/testing/testing.go:1742 +0x390
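
The goroutines above that sit in [chan receive, 33 minutes] are the parallel-subtest backlog: each has called t.Parallel() through the MaybeParallel helper and is parked in testing.(*testContext).waitParallel until a -test.parallel slot frees up, which never happens while the foreground starts are wedged. A minimal sketch of that gating pattern, using a hypothetical plugin list in place of the real table in net_test.go:

package integration

import "testing"

// maybeParallel mirrors the MaybeParallel helper named in the traces; this
// one-line body is an assumption, not the actual minikube implementation.
func maybeParallel(t *testing.T) {
	t.Parallel() // parks in testing.(*testContext).waitParallel until a slot is free
}

func TestNetworkPluginsSketch(t *testing.T) {
	// Hypothetical plugin names; the real list lives in net_test.go.
	for _, name := range []string{"auto", "kindnet", "bridge"} {
		t.Run(name, func(t *testing.T) {
			maybeParallel(t)
			// Per-plugin start/validate steps would run here; when every
			// parallel slot is occupied, each subtest queues as above.
		})
	}
}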

TestDockerFlags (756.94s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-058000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0415 18:20:04.960711    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:20:15.024795    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 18:24:48.148595    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:25:05.032358    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:25:15.097625    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-058000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.649341157s)

-- stdout --
	* [docker-flags-058000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-058000" primary control-plane node in "docker-flags-058000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-058000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 18:17:13.370766   11634 out.go:291] Setting OutFile to fd 1 ...
	I0415 18:17:13.371034   11634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:17:13.371040   11634 out.go:304] Setting ErrFile to fd 2...
	I0415 18:17:13.371044   11634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:17:13.371226   11634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 18:17:13.372847   11634 out.go:298] Setting JSON to false
	I0415 18:17:13.396069   11634 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6404,"bootTime":1713223829,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 18:17:13.396155   11634 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:17:13.418284   11634 out.go:177] * [docker-flags-058000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 18:17:13.461162   11634 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 18:17:13.461215   11634 notify.go:220] Checking for updates...
	I0415 18:17:13.483085   11634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 18:17:13.503964   11634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 18:17:13.525179   11634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:17:13.546101   11634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 18:17:13.566967   11634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:17:13.588888   11634 config.go:182] Loaded profile config "force-systemd-flag-313000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:17:13.589057   11634 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:17:13.644078   11634 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 18:17:13.644247   11634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:17:13.752620   11634 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-16 01:17:13.74051445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:17:13.774651   11634 out.go:177] * Using the docker driver based on user configuration
	I0415 18:17:13.796153   11634 start.go:297] selected driver: docker
	I0415 18:17:13.796187   11634 start.go:901] validating driver "docker" against <nil>
	I0415 18:17:13.796207   11634 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:17:13.800576   11634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:17:13.905886   11634 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-16 01:17:13.893754676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:17:13.906129   11634 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:17:13.906331   11634 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0415 18:17:13.927460   11634 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 18:17:13.949397   11634 cni.go:84] Creating CNI manager for ""
	I0415 18:17:13.949440   11634 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:17:13.949453   11634 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 18:17:13.949579   11634 start.go:340] cluster config:
	{Name:docker-flags-058000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:17:13.971109   11634 out.go:177] * Starting "docker-flags-058000" primary control-plane node in "docker-flags-058000" cluster
	I0415 18:17:14.013404   11634 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 18:17:14.036192   11634 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 18:17:14.078432   11634 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:17:14.078470   11634 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 18:17:14.078522   11634 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:17:14.078554   11634 cache.go:56] Caching tarball of preloaded images
	I0415 18:17:14.078813   11634 preload.go:173] Found /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:17:14.078829   11634 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:17:14.079810   11634 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/docker-flags-058000/config.json ...
	I0415 18:17:14.079971   11634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/docker-flags-058000/config.json: {Name:mkdcc96a68dd6162572a78bd4fd4660191413d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:17:14.204400   11634 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0415 18:17:14.204477   11634 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0415 18:17:14.204521   11634 cache.go:194] Successfully downloaded all kic artifacts
	I0415 18:17:14.204580   11634 start.go:360] acquireMachinesLock for docker-flags-058000: {Name:mk649406b197f489046529acd32e3c76ffad3226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:17:14.204802   11634 start.go:364] duration metric: took 204.926µs to acquireMachinesLock for "docker-flags-058000"
	I0415 18:17:14.204847   11634 start.go:93] Provisioning new machine with config: &{Name:docker-flags-058000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:17:14.204960   11634 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:17:14.227369   11634 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:17:14.227727   11634 start.go:159] libmachine.API.Create for "docker-flags-058000" (driver="docker")
	I0415 18:17:14.227771   11634 client.go:168] LocalClient.Create starting
	I0415 18:17:14.227951   11634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:17:14.228062   11634 main.go:141] libmachine: Decoding PEM data...
	I0415 18:17:14.228094   11634 main.go:141] libmachine: Parsing certificate...
	I0415 18:17:14.228181   11634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:17:14.228256   11634 main.go:141] libmachine: Decoding PEM data...
	I0415 18:17:14.228273   11634 main.go:141] libmachine: Parsing certificate...
	I0415 18:17:14.229171   11634 cli_runner.go:164] Run: docker network inspect docker-flags-058000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:17:14.278850   11634 cli_runner.go:211] docker network inspect docker-flags-058000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:17:14.278950   11634 network_create.go:281] running [docker network inspect docker-flags-058000] to gather additional debugging logs...
	I0415 18:17:14.278968   11634 cli_runner.go:164] Run: docker network inspect docker-flags-058000
	W0415 18:17:14.326107   11634 cli_runner.go:211] docker network inspect docker-flags-058000 returned with exit code 1
	I0415 18:17:14.326135   11634 network_create.go:284] error running [docker network inspect docker-flags-058000]: docker network inspect docker-flags-058000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-058000 not found
	I0415 18:17:14.326150   11634 network_create.go:286] output of [docker network inspect docker-flags-058000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-058000 not found
	
	** /stderr **
	I0415 18:17:14.326291   11634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:17:14.375604   11634 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:17:14.377243   11634 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:17:14.378822   11634 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:17:14.379154   11634 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a59d60}
	I0415 18:17:14.379170   11634 network_create.go:124] attempt to create docker network docker-flags-058000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 18:17:14.379249   11634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-058000 docker-flags-058000
	W0415 18:17:14.427895   11634 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-058000 docker-flags-058000 returned with exit code 1
	W0415 18:17:14.427930   11634 network_create.go:149] failed to create docker network docker-flags-058000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-058000 docker-flags-058000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 18:17:14.427951   11634 network_create.go:116] failed to create docker network docker-flags-058000 192.168.76.0/24, will retry: subnet is taken
	I0415 18:17:14.429311   11634 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:17:14.429666   11634 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00227ace0}
	I0415 18:17:14.429685   11634 network_create.go:124] attempt to create docker network docker-flags-058000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 18:17:14.429749   11634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-058000 docker-flags-058000
	I0415 18:17:14.514493   11634 network_create.go:108] docker network docker-flags-058000 192.168.85.0/24 created
	I0415 18:17:14.514528   11634 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-058000" container
	I0415 18:17:14.514639   11634 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:17:14.565145   11634 cli_runner.go:164] Run: docker volume create docker-flags-058000 --label name.minikube.sigs.k8s.io=docker-flags-058000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:17:14.613950   11634 oci.go:103] Successfully created a docker volume docker-flags-058000
	I0415 18:17:14.614071   11634 cli_runner.go:164] Run: docker run --rm --name docker-flags-058000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-058000 --entrypoint /usr/bin/test -v docker-flags-058000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:17:14.935019   11634 oci.go:107] Successfully prepared a docker volume docker-flags-058000
	I0415 18:17:14.935060   11634 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:17:14.935076   11634 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:17:14.935174   11634 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-058000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:23:14.300866   11634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:23:14.301003   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:14.350762   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:23:14.350873   11634 retry.go:31] will retry after 263.686752ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:14.616935   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:14.670652   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:23:14.670758   11634 retry.go:31] will retry after 445.839675ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:15.118991   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:15.169914   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:23:15.170014   11634 retry.go:31] will retry after 704.882619ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:15.876730   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:15.926931   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	W0415 18:23:15.927043   11634 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	
	W0415 18:23:15.927065   11634 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:15.927124   11634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:23:15.927176   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:15.975785   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:23:15.975875   11634 retry.go:31] will retry after 178.416277ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:16.156656   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:16.209068   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:23:16.209158   11634 retry.go:31] will retry after 544.553108ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:16.756074   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:16.809269   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:23:16.809367   11634 retry.go:31] will retry after 480.337119ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:17.291727   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:17.342686   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:23:17.342782   11634 retry.go:31] will retry after 536.982718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:17.882191   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:23:17.935538   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	W0415 18:23:17.935630   11634 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	
	W0415 18:23:17.935651   11634 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:17.935669   11634 start.go:128] duration metric: took 6m3.658337828s to createHost
	I0415 18:23:17.935676   11634 start.go:83] releasing machines lock for "docker-flags-058000", held for 6m3.658505588s
	W0415 18:23:17.935692   11634 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 18:23:17.936137   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:17.984093   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:17.984153   11634 delete.go:82] Unable to get host status for docker-flags-058000, assuming it has already been deleted: state: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	W0415 18:23:17.984228   11634 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 18:23:17.984239   11634 start.go:728] Will try again in 5 seconds ...
	I0415 18:23:22.986977   11634 start.go:360] acquireMachinesLock for docker-flags-058000: {Name:mk649406b197f489046529acd32e3c76ffad3226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:23:22.987224   11634 start.go:364] duration metric: took 164.581µs to acquireMachinesLock for "docker-flags-058000"
	I0415 18:23:22.987262   11634 start.go:96] Skipping create...Using existing machine configuration
	I0415 18:23:22.987282   11634 fix.go:54] fixHost starting: 
	I0415 18:23:22.987783   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:23.041180   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:23.041228   11634 fix.go:112] recreateIfNeeded on docker-flags-058000: state= err=unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:23.041247   11634 fix.go:117] machineExists: false. err=machine does not exist
	I0415 18:23:23.063379   11634 out.go:177] * docker "docker-flags-058000" container is missing, will recreate.
	I0415 18:23:23.084793   11634 delete.go:124] DEMOLISHING docker-flags-058000 ...
	I0415 18:23:23.085052   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:23.135333   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	W0415 18:23:23.135391   11634 stop.go:83] unable to get state: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:23.135410   11634 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:23.135803   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:23.183355   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:23.183415   11634 delete.go:82] Unable to get host status for docker-flags-058000, assuming it has already been deleted: state: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:23.183496   11634 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-058000
	W0415 18:23:23.231615   11634 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-058000 returned with exit code 1
	I0415 18:23:23.231649   11634 kic.go:371] could not find the container docker-flags-058000 to remove it. will try anyways
	I0415 18:23:23.231752   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:23.279740   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	W0415 18:23:23.279785   11634 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:23.279872   11634 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-058000 /bin/bash -c "sudo init 0"
	W0415 18:23:23.326893   11634 cli_runner.go:211] docker exec --privileged -t docker-flags-058000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 18:23:23.326921   11634 oci.go:650] error shutdown docker-flags-058000: docker exec --privileged -t docker-flags-058000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:24.328318   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:24.379786   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:24.379830   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:24.379840   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:24.379865   11634 retry.go:31] will retry after 607.25392ms: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:24.989457   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:25.043977   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:25.044026   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:25.044035   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:25.044060   11634 retry.go:31] will retry after 1.054743462s: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:26.099828   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:26.151293   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:26.151341   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:26.151354   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:26.151381   11634 retry.go:31] will retry after 1.684406441s: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:27.836221   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:27.887962   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:27.888006   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:27.888017   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:27.888043   11634 retry.go:31] will retry after 997.009544ms: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:28.886158   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:28.939401   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:28.939446   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:28.939459   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:28.939482   11634 retry.go:31] will retry after 1.275437133s: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:30.217244   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:30.268566   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:30.268621   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:30.268632   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:30.268656   11634 retry.go:31] will retry after 3.043073589s: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:33.312686   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:33.364630   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:33.364674   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:33.364684   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:33.364709   11634 retry.go:31] will retry after 8.536957352s: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:41.903891   11634 cli_runner.go:164] Run: docker container inspect docker-flags-058000 --format={{.State.Status}}
	W0415 18:23:41.956552   11634 cli_runner.go:211] docker container inspect docker-flags-058000 --format={{.State.Status}} returned with exit code 1
	I0415 18:23:41.956597   11634 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:23:41.956606   11634 oci.go:664] temporary error: container docker-flags-058000 status is  but expect it to be exited
	I0415 18:23:41.956645   11634 oci.go:88] couldn't shut down docker-flags-058000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	 
	I0415 18:23:41.956729   11634 cli_runner.go:164] Run: docker rm -f -v docker-flags-058000
	I0415 18:23:42.005816   11634 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-058000
	W0415 18:23:42.053976   11634 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-058000 returned with exit code 1
	I0415 18:23:42.054071   11634 cli_runner.go:164] Run: docker network inspect docker-flags-058000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:23:42.102087   11634 cli_runner.go:164] Run: docker network rm docker-flags-058000
	I0415 18:23:42.203925   11634 fix.go:124] Sleeping 1 second for extra luck!
	I0415 18:23:43.205072   11634 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:23:43.228206   11634 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:23:43.228329   11634 start.go:159] libmachine.API.Create for "docker-flags-058000" (driver="docker")
	I0415 18:23:43.228351   11634 client.go:168] LocalClient.Create starting
	I0415 18:23:43.228496   11634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:23:43.228561   11634 main.go:141] libmachine: Decoding PEM data...
	I0415 18:23:43.228578   11634 main.go:141] libmachine: Parsing certificate...
	I0415 18:23:43.228637   11634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:23:43.228686   11634 main.go:141] libmachine: Decoding PEM data...
	I0415 18:23:43.228696   11634 main.go:141] libmachine: Parsing certificate...
	I0415 18:23:43.249693   11634 cli_runner.go:164] Run: docker network inspect docker-flags-058000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:23:43.299380   11634 cli_runner.go:211] docker network inspect docker-flags-058000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:23:43.299489   11634 network_create.go:281] running [docker network inspect docker-flags-058000] to gather additional debugging logs...
	I0415 18:23:43.299514   11634 cli_runner.go:164] Run: docker network inspect docker-flags-058000
	W0415 18:23:43.347648   11634 cli_runner.go:211] docker network inspect docker-flags-058000 returned with exit code 1
	I0415 18:23:43.347685   11634 network_create.go:284] error running [docker network inspect docker-flags-058000]: docker network inspect docker-flags-058000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-058000 not found
	I0415 18:23:43.347699   11634 network_create.go:286] output of [docker network inspect docker-flags-058000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-058000 not found
	
	** /stderr **
	I0415 18:23:43.347811   11634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:23:43.397239   11634 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:23:43.398594   11634 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:23:43.399968   11634 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:23:43.401380   11634 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:23:43.403015   11634 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:23:43.404772   11634 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:23:43.405199   11634 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021e9b90}
	I0415 18:23:43.405214   11634 network_create.go:124] attempt to create docker network docker-flags-058000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0415 18:23:43.405322   11634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-058000 docker-flags-058000
	I0415 18:23:43.509698   11634 network_create.go:108] docker network docker-flags-058000 192.168.103.0/24 created
	I0415 18:23:43.509740   11634 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-058000" container
	I0415 18:23:43.509848   11634 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:23:43.561151   11634 cli_runner.go:164] Run: docker volume create docker-flags-058000 --label name.minikube.sigs.k8s.io=docker-flags-058000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:23:43.609725   11634 oci.go:103] Successfully created a docker volume docker-flags-058000
	I0415 18:23:43.609837   11634 cli_runner.go:164] Run: docker run --rm --name docker-flags-058000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-058000 --entrypoint /usr/bin/test -v docker-flags-058000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:23:43.848008   11634 oci.go:107] Successfully prepared a docker volume docker-flags-058000
	I0415 18:23:43.848057   11634 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:23:43.848070   11634 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:23:43.848178   11634 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-058000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:29:43.227782   11634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:29:43.227904   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:43.279180   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:43.279288   11634 retry.go:31] will retry after 134.938362ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:43.414669   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:43.467879   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:43.467994   11634 retry.go:31] will retry after 385.862912ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:43.856284   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:43.907659   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:43.907758   11634 retry.go:31] will retry after 484.073676ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:44.394170   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:44.447598   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	W0415 18:29:44.447703   11634 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	
	W0415 18:29:44.447730   11634 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:44.447786   11634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:29:44.447846   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:44.495517   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:44.495612   11634 retry.go:31] will retry after 146.237685ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:44.643545   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:44.696697   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:44.696792   11634 retry.go:31] will retry after 333.012141ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:45.031664   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:45.082558   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:45.082652   11634 retry.go:31] will retry after 716.638584ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:45.801696   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:45.853635   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:45.853735   11634 retry.go:31] will retry after 468.161182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:46.324330   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:46.376922   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	W0415 18:29:46.377024   11634 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	
	W0415 18:29:46.377049   11634 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:46.377062   11634 start.go:128] duration metric: took 6m3.174864693s to createHost
	I0415 18:29:46.377138   11634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:29:46.377190   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:46.427546   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:46.427636   11634 retry.go:31] will retry after 228.064864ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:46.656135   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:46.707573   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:46.707664   11634 retry.go:31] will retry after 291.186543ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:47.000728   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:47.051862   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:47.051955   11634 retry.go:31] will retry after 348.562131ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:47.402975   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:47.455268   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	W0415 18:29:47.455366   11634 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	
	W0415 18:29:47.455391   11634 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:47.455446   11634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:29:47.455505   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:47.503173   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:47.503264   11634 retry.go:31] will retry after 312.292181ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:47.816852   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:47.868814   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:47.868914   11634 retry.go:31] will retry after 230.744893ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:48.100172   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:48.153509   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	I0415 18:29:48.153614   11634 retry.go:31] will retry after 675.257252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:48.830589   11634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000
	W0415 18:29:48.882564   11634 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000 returned with exit code 1
	W0415 18:29:48.882660   11634 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	
	W0415 18:29:48.882679   11634 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-058000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-058000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	I0415 18:29:48.882690   11634 fix.go:56] duration metric: took 6m25.898559156s for fixHost
	I0415 18:29:48.882695   11634 start.go:83] releasing machines lock for "docker-flags-058000", held for 6m25.898603609s
	W0415 18:29:48.882768   11634 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-058000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-058000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 18:29:48.925183   11634 out.go:177] 
	W0415 18:29:48.946403   11634 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 18:29:48.946462   11634 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 18:29:48.946494   11634 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 18:29:48.968469   11634 out.go:177] 

                                                
                                                
** /stderr **
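
The stderr transcript above is dominated by retry.go:31 "will retry after ..." lines: minikube re-runs `docker container inspect --format={{.State.Status}}` with a randomized, growing delay until the container reports "exited" or the attempt budget is spent. A minimal sketch of that polling pattern, assuming an illustrative helper name, delay bounds, and attempt budget (this is not minikube's actual retry.go):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// retryWithBackoff retries fn with a randomized, growing delay, mirroring
// the "will retry after ..." lines in the log. Bounds are illustrative.
func retryWithBackoff(attempts int, fn func() error) error {
	var err error
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2 // grow the base delay each attempt
	}
	return err
}

func main() {
	err := retryWithBackoff(8, func() error {
		out, err := exec.Command("docker", "container", "inspect",
			"docker-flags-058000", "--format", "{{.State.Status}}").Output()
		if err != nil {
			return fmt.Errorf("unknown state: %w", err)
		}
		if s := strings.TrimSpace(string(out)); s != "exited" {
			return fmt.Errorf("status is %q but expect it to be exited", s)
		}
		return nil
	})
	if err != nil {
		fmt.Println("couldn't verify container is exited:", err)
	}
}

In this run every attempt fails identically, because the daemon answers "No such container" rather than reporting a state, so the loop can never observe "exited" and eventually gives up ("might be okay") and falls through to `docker rm -f`.
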
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-058000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-058000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-058000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (198.415709ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-058000 host status: state: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	

                                                
                                                
** /stderr **
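
For reference, the harness never talks to Docker directly here; it execs the built binary and branches on the process exit code (52 for the failed start above, 80 for this GUEST_STATUS failure). A rough sketch of that invocation pattern, with a hypothetical helper (the binary path and profile name are copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube execs the built binary the way the harness does and returns
// combined output plus the process exit code.
func runMinikube(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	} else if err != nil {
		code = -1 // binary missing or never started
	}
	return string(out), code
}

func main() {
	out, code := runMinikube("-p", "docker-flags-058000", "ssh",
		"sudo systemctl show docker --property=Environment --no-pager")
	if code != 0 {
		// Exit status 80 in this run: GUEST_STATUS, since the backing
		// container was never created and cannot be inspected.
		fmt.Printf("ssh failed with exit status %d\n%s", code, out)
	}
}
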
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-058000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-058000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-058000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (198.697608ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-058000 host status: state: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000
	

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-058000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-058000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
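
Both assertions reduce to substring checks: each `--docker-env` pair must appear in the unit's `Environment=` property, and each `--docker-opt` (here `debug`) must appear in `ExecStart=`. Since the ssh calls returned only "\n\n", every check fails. A compressed sketch of those checks, with an illustrative helper name:

package main

import (
	"fmt"
	"strings"
)

// assertContains mirrors the test's expectation style: the systemctl output
// must include each configured key/value or daemon flag.
func assertContains(output, want, what string) error {
	if !strings.Contains(output, want) {
		return fmt.Errorf("expected %s %q to be included in: %q", what, want, output)
	}
	return nil
}

func main() {
	envOut := "\n\n"  // what `systemctl show docker --property=Environment` returned
	execOut := "\n\n" // what `systemctl show docker --property=ExecStart` returned

	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if err := assertContains(envOut, kv, "env key/value"); err != nil {
			fmt.Println(err)
		}
	}
	if err := assertContains(execOut, "--debug", "docker option"); err != nil {
		fmt.Println(err)
	}
}
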
panic.go:626: *** TestDockerFlags FAILED at 2024-04-15 18:29:49.440292 -0700 PDT m=+6785.862974620
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-058000
helpers_test.go:235: (dbg) docker inspect docker-flags-058000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "docker-flags-058000",
	        "Id": "c56ca61113a2627a050630ded8df5975652952fa16a204facdd774b826e220ed",
	        "Created": "2024-04-16T01:23:43.470471989Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-058000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
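
The inspect output confirms what network_create.go logged during the run: the profile's bridge network outlived the failed start, sitting on the first free /24 after the six reserved ones. The scan walked third octets 49, 58, 67, 76, 85, 94 in steps of 9 and took the first unreserved candidate, 192.168.103.0/24. A toy reconstruction of that walk, assuming the step and reserved set visible in the log (not minikube's actual network.go):

package main

import "fmt"

// reserved mirrors the subnets the log reported as already taken.
var reserved = map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}

// freePrivateSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... in steps of
// 9, as in the log, and returns the first candidate not yet reserved.
func freePrivateSubnet() (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		if reserved[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is reserved\n", octet)
			continue
		}
		return fmt.Sprintf("192.168.%d.0/24", octet), true
	}
	return "", false
}

func main() {
	if subnet, ok := freePrivateSubnet(); ok {
		// The gateway takes .1 and the node takes .2, matching the log's
		// "calculated static IP 192.168.103.2".
		fmt.Println("using free private subnet", subnet)
	}
}
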
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-058000 -n docker-flags-058000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-058000 -n docker-flags-058000: exit status 7 (111.624889ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 18:29:49.601906   12310 status.go:249] status error: host: state: unknown state "docker-flags-058000": docker container inspect docker-flags-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-058000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-058000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-058000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-058000
--- FAIL: TestDockerFlags (756.94s)
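
Both failures in this report share a shape: the preload-extraction `docker run ... tar -I lz4 -xf /preloaded.tar` issued at 18:23:43 never completes, createHost exceeds its 360-second budget, the recreate attempt repeats the pattern, and the test exits 52 with DRV_CREATE_TIMEOUT after roughly 12.5 minutes. A sketch of enforcing such a budget with context.WithTimeout (the 360 s figure comes from the log; the function and the child command are illustrative stand-ins):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// createHost runs the container-creation steps under a hard deadline,
// mirroring "create host timed out in 360.000000 seconds" in the log.
func createHost(ctx context.Context, name string) error {
	ctx, cancel := context.WithTimeout(ctx, 360*time.Second)
	defer cancel()

	// Stand-in for the long pole in this run: extracting the preloaded
	// image tarball into the docker volume. CommandContext kills the
	// child process once the deadline passes.
	cmd := exec.CommandContext(ctx, "sleep", "3600")
	if err := cmd.Run(); err != nil {
		if ctx.Err() == context.DeadlineExceeded {
			return fmt.Errorf("creating host: create host timed out in %.6f seconds", 360.0)
		}
		return fmt.Errorf("creating host %q: %w", name, err)
	}
	return nil
}

func main() {
	if err := createHost(context.Background(), "docker-flags-058000"); err != nil {
		fmt.Println("Failed to start docker container:", err)
	}
}
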

                                                
                                    
TestForceSystemdFlag (757.42s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-313000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-313000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m36.324657291s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-313000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-313000" primary control-plane node in "force-systemd-flag-313000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-313000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 18:16:29.161777   11487 out.go:291] Setting OutFile to fd 1 ...
	I0415 18:16:29.161953   11487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:16:29.161958   11487 out.go:304] Setting ErrFile to fd 2...
	I0415 18:16:29.161962   11487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:16:29.162150   11487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 18:16:29.163636   11487 out.go:298] Setting JSON to false
	I0415 18:16:29.186392   11487 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6360,"bootTime":1713223829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 18:16:29.186473   11487 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:16:29.208778   11487 out.go:177] * [force-systemd-flag-313000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 18:16:29.251932   11487 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 18:16:29.251998   11487 notify.go:220] Checking for updates...
	I0415 18:16:29.273777   11487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 18:16:29.294911   11487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 18:16:29.316809   11487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:16:29.337688   11487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 18:16:29.358874   11487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:16:29.380778   11487 config.go:182] Loaded profile config "force-systemd-env-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:16:29.380958   11487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:16:29.436133   11487 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 18:16:29.436308   11487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:16:29.561675   11487 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-16 01:16:29.551170264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:16:29.604562   11487 out.go:177] * Using the docker driver based on user configuration
	I0415 18:16:29.625409   11487 start.go:297] selected driver: docker
	I0415 18:16:29.625440   11487 start.go:901] validating driver "docker" against <nil>
	I0415 18:16:29.625476   11487 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:16:29.629821   11487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:16:29.737389   11487 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-16 01:16:29.727009608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:16:29.737556   11487 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:16:29.737751   11487 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 18:16:29.759720   11487 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 18:16:29.781371   11487 cni.go:84] Creating CNI manager for ""
	I0415 18:16:29.781415   11487 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:16:29.781441   11487 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 18:16:29.781544   11487 start.go:340] cluster config:
	{Name:force-systemd-flag-313000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:16:29.803050   11487 out.go:177] * Starting "force-systemd-flag-313000" primary control-plane node in "force-systemd-flag-313000" cluster
	I0415 18:16:29.845133   11487 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 18:16:29.868104   11487 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 18:16:29.910351   11487 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:16:29.910430   11487 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:16:29.910437   11487 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 18:16:29.910457   11487 cache.go:56] Caching tarball of preloaded images
	I0415 18:16:29.910697   11487 preload.go:173] Found /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:16:29.910718   11487 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:16:29.910840   11487 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/force-systemd-flag-313000/config.json ...
	I0415 18:16:29.910903   11487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/force-systemd-flag-313000/config.json: {Name:mk01d59241921e82265619911479f21be3d57dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:16:29.963321   11487 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0415 18:16:29.963338   11487 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0415 18:16:29.963359   11487 cache.go:194] Successfully downloaded all kic artifacts
	I0415 18:16:29.963409   11487 start.go:360] acquireMachinesLock for force-systemd-flag-313000: {Name:mkdd9392e066fb1152e915eede87b79a270901e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:16:29.963590   11487 start.go:364] duration metric: took 167.007µs to acquireMachinesLock for "force-systemd-flag-313000"
	I0415 18:16:29.963631   11487 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-313000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:16:29.963772   11487 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:16:30.006307   11487 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:16:30.006759   11487 start.go:159] libmachine.API.Create for "force-systemd-flag-313000" (driver="docker")
	I0415 18:16:30.006808   11487 client.go:168] LocalClient.Create starting
	I0415 18:16:30.006985   11487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:16:30.007084   11487 main.go:141] libmachine: Decoding PEM data...
	I0415 18:16:30.007118   11487 main.go:141] libmachine: Parsing certificate...
	I0415 18:16:30.007211   11487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:16:30.007288   11487 main.go:141] libmachine: Decoding PEM data...
	I0415 18:16:30.007304   11487 main.go:141] libmachine: Parsing certificate...
	I0415 18:16:30.008181   11487 cli_runner.go:164] Run: docker network inspect force-systemd-flag-313000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:16:30.057380   11487 cli_runner.go:211] docker network inspect force-systemd-flag-313000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:16:30.057482   11487 network_create.go:281] running [docker network inspect force-systemd-flag-313000] to gather additional debugging logs...
	I0415 18:16:30.057496   11487 cli_runner.go:164] Run: docker network inspect force-systemd-flag-313000
	W0415 18:16:30.105124   11487 cli_runner.go:211] docker network inspect force-systemd-flag-313000 returned with exit code 1
	I0415 18:16:30.105156   11487 network_create.go:284] error running [docker network inspect force-systemd-flag-313000]: docker network inspect force-systemd-flag-313000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-313000 not found
	I0415 18:16:30.105177   11487 network_create.go:286] output of [docker network inspect force-systemd-flag-313000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-313000 not found
	
	** /stderr **
	I0415 18:16:30.105293   11487 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:16:30.155258   11487 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:16:30.156829   11487 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:16:30.157212   11487 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00215f490}
	I0415 18:16:30.157228   11487 network_create.go:124] attempt to create docker network force-systemd-flag-313000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 18:16:30.157300   11487 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-313000 force-systemd-flag-313000
	I0415 18:16:30.242023   11487 network_create.go:108] docker network force-systemd-flag-313000 192.168.67.0/24 created
	I0415 18:16:30.242058   11487 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-313000" container
	I0415 18:16:30.242177   11487 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:16:30.292254   11487 cli_runner.go:164] Run: docker volume create force-systemd-flag-313000 --label name.minikube.sigs.k8s.io=force-systemd-flag-313000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:16:30.342669   11487 oci.go:103] Successfully created a docker volume force-systemd-flag-313000
	I0415 18:16:30.342800   11487 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-313000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-313000 --entrypoint /usr/bin/test -v force-systemd-flag-313000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:16:30.686982   11487 oci.go:107] Successfully prepared a docker volume force-systemd-flag-313000
	I0415 18:16:30.687028   11487 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:16:30.687046   11487 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:16:30.687192   11487 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-313000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:22:30.080906   11487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:22:30.081039   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:30.134227   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:30.134363   11487 retry.go:31] will retry after 154.992779ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:30.290721   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:30.344764   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:30.344875   11487 retry.go:31] will retry after 437.477754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:30.784803   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:30.838471   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:30.838582   11487 retry.go:31] will retry after 717.470093ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:31.557525   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:31.610412   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	W0415 18:22:31.610521   11487 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	
	W0415 18:22:31.610541   11487 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:31.610598   11487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:22:31.610656   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:31.660544   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:31.660636   11487 retry.go:31] will retry after 157.25001ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:31.820249   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:31.870698   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:31.870799   11487 retry.go:31] will retry after 516.220704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:32.388277   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:32.442283   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:32.442391   11487 retry.go:31] will retry after 634.117519ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:33.077580   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:22:33.130949   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	W0415 18:22:33.131051   11487 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	
	W0415 18:22:33.131068   11487 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:33.131081   11487 start.go:128] duration metric: took 6m3.094644082s to createHost
	I0415 18:22:33.131090   11487 start.go:83] releasing machines lock for "force-systemd-flag-313000", held for 6m3.09483947s
	W0415 18:22:33.131105   11487 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 18:22:33.131539   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:33.181112   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:33.181166   11487 delete.go:82] Unable to get host status for force-systemd-flag-313000, assuming it has already been deleted: state: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	W0415 18:22:33.181247   11487 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 18:22:33.181258   11487 start.go:728] Will try again in 5 seconds ...
	I0415 18:22:38.183537   11487 start.go:360] acquireMachinesLock for force-systemd-flag-313000: {Name:mkdd9392e066fb1152e915eede87b79a270901e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:22:38.183751   11487 start.go:364] duration metric: took 172.08µs to acquireMachinesLock for "force-systemd-flag-313000"
	I0415 18:22:38.183793   11487 start.go:96] Skipping create...Using existing machine configuration
	I0415 18:22:38.183808   11487 fix.go:54] fixHost starting: 
	I0415 18:22:38.184235   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:38.233969   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:38.234030   11487 fix.go:112] recreateIfNeeded on force-systemd-flag-313000: state= err=unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:38.234053   11487 fix.go:117] machineExists: false. err=machine does not exist
	I0415 18:22:38.256956   11487 out.go:177] * docker "force-systemd-flag-313000" container is missing, will recreate.
	I0415 18:22:38.299424   11487 delete.go:124] DEMOLISHING force-systemd-flag-313000 ...
	I0415 18:22:38.299613   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:38.350190   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	W0415 18:22:38.350246   11487 stop.go:83] unable to get state: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:38.350264   11487 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:38.350658   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:38.401463   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:38.401518   11487 delete.go:82] Unable to get host status for force-systemd-flag-313000, assuming it has already been deleted: state: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:38.401600   11487 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-313000
	W0415 18:22:38.474719   11487 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:38.474764   11487 kic.go:371] could not find the container force-systemd-flag-313000 to remove it. will try anyways
	I0415 18:22:38.474853   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:38.524428   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	W0415 18:22:38.524493   11487 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:38.524570   11487 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-313000 /bin/bash -c "sudo init 0"
	W0415 18:22:38.571683   11487 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-313000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 18:22:38.571716   11487 oci.go:650] error shutdown force-systemd-flag-313000: docker exec --privileged -t force-systemd-flag-313000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:39.573859   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:39.639176   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:39.639224   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:39.639235   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:39.639267   11487 retry.go:31] will retry after 545.307804ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:40.184929   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:40.237398   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:40.237452   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:40.237466   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:40.237491   11487 retry.go:31] will retry after 937.466503ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:41.177267   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:41.230950   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:41.230997   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:41.231007   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:41.231034   11487 retry.go:31] will retry after 882.785785ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:42.114304   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:42.164085   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:42.164131   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:42.164140   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:42.164164   11487 retry.go:31] will retry after 2.185322612s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:44.351842   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:44.404518   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:44.404573   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:44.404582   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:44.404605   11487 retry.go:31] will retry after 3.121387689s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:47.527591   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:47.578883   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:47.578938   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:47.578953   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:47.578976   11487 retry.go:31] will retry after 4.65154763s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:52.231176   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:52.281908   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:52.281958   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:52.281970   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:52.281997   11487 retry.go:31] will retry after 4.603424018s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:56.887383   11487 cli_runner.go:164] Run: docker container inspect force-systemd-flag-313000 --format={{.State.Status}}
	W0415 18:22:56.941574   11487 cli_runner.go:211] docker container inspect force-systemd-flag-313000 --format={{.State.Status}} returned with exit code 1
	I0415 18:22:56.941625   11487 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:22:56.941636   11487 oci.go:664] temporary error: container force-systemd-flag-313000 status is  but expect it to be exited
	I0415 18:22:56.941674   11487 oci.go:88] couldn't shut down force-systemd-flag-313000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	 
	I0415 18:22:56.941746   11487 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-313000
	I0415 18:22:56.990922   11487 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-313000
	W0415 18:22:57.039299   11487 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:57.039413   11487 cli_runner.go:164] Run: docker network inspect force-systemd-flag-313000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:22:57.088552   11487 cli_runner.go:164] Run: docker network rm force-systemd-flag-313000
	I0415 18:22:57.195182   11487 fix.go:124] Sleeping 1 second for extra luck!
	I0415 18:22:58.196112   11487 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:22:58.219372   11487 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:22:58.219541   11487 start.go:159] libmachine.API.Create for "force-systemd-flag-313000" (driver="docker")
	I0415 18:22:58.219566   11487 client.go:168] LocalClient.Create starting
	I0415 18:22:58.219797   11487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:22:58.219902   11487 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:58.219926   11487 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:58.220013   11487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:22:58.220088   11487 main.go:141] libmachine: Decoding PEM data...
	I0415 18:22:58.220102   11487 main.go:141] libmachine: Parsing certificate...
	I0415 18:22:58.241509   11487 cli_runner.go:164] Run: docker network inspect force-systemd-flag-313000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:22:58.293314   11487 cli_runner.go:211] docker network inspect force-systemd-flag-313000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:22:58.293403   11487 network_create.go:281] running [docker network inspect force-systemd-flag-313000] to gather additional debugging logs...
	I0415 18:22:58.293423   11487 cli_runner.go:164] Run: docker network inspect force-systemd-flag-313000
	W0415 18:22:58.342917   11487 cli_runner.go:211] docker network inspect force-systemd-flag-313000 returned with exit code 1
	I0415 18:22:58.342949   11487 network_create.go:284] error running [docker network inspect force-systemd-flag-313000]: docker network inspect force-systemd-flag-313000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-313000 not found
	I0415 18:22:58.342964   11487 network_create.go:286] output of [docker network inspect force-systemd-flag-313000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-313000 not found
	
	** /stderr **
	I0415 18:22:58.343111   11487 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:22:58.393530   11487 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:22:58.395090   11487 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:22:58.396573   11487 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:22:58.398231   11487 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:22:58.399620   11487 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:22:58.399973   11487 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00230d970}
	I0415 18:22:58.399986   11487 network_create.go:124] attempt to create docker network force-systemd-flag-313000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0415 18:22:58.400056   11487 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-313000 force-systemd-flag-313000
	I0415 18:22:58.507830   11487 network_create.go:108] docker network force-systemd-flag-313000 192.168.94.0/24 created
	I0415 18:22:58.507870   11487 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-313000" container
	I0415 18:22:58.507971   11487 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:22:58.558610   11487 cli_runner.go:164] Run: docker volume create force-systemd-flag-313000 --label name.minikube.sigs.k8s.io=force-systemd-flag-313000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:22:58.606240   11487 oci.go:103] Successfully created a docker volume force-systemd-flag-313000
	I0415 18:22:58.606358   11487 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-313000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-313000 --entrypoint /usr/bin/test -v force-systemd-flag-313000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:22:58.849641   11487 oci.go:107] Successfully prepared a docker volume force-systemd-flag-313000
	I0415 18:22:58.849676   11487 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:22:58.849689   11487 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:22:58.849803   11487 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-313000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:28:58.219042   11487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:28:58.219167   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:28:58.271174   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:28:58.271293   11487 retry.go:31] will retry after 287.822464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:28:58.561535   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:28:58.615209   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:28:58.615318   11487 retry.go:31] will retry after 265.439712ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:28:58.882156   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:28:58.936957   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:28:58.937081   11487 retry.go:31] will retry after 471.806681ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:28:59.409378   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:28:59.460369   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:28:59.460474   11487 retry.go:31] will retry after 633.494388ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:00.095597   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:00.146791   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	W0415 18:29:00.146904   11487 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	
	W0415 18:29:00.146933   11487 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:00.146991   11487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:29:00.147043   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:00.195360   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:00.195464   11487 retry.go:31] will retry after 257.790632ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:00.455469   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:00.510416   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:00.510511   11487 retry.go:31] will retry after 188.592612ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:00.699500   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:00.752626   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:00.752746   11487 retry.go:31] will retry after 424.235614ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:01.177717   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:01.228008   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:01.228103   11487 retry.go:31] will retry after 516.96244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:01.747422   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:01.800219   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	W0415 18:29:01.800330   11487 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	
	W0415 18:29:01.800349   11487 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:01.800360   11487 start.go:128] duration metric: took 6m3.607167277s to createHost
	I0415 18:29:01.800432   11487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:29:01.800484   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:01.852530   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:01.852623   11487 retry.go:31] will retry after 270.640324ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:02.125446   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:02.176374   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:02.176475   11487 retry.go:31] will retry after 461.703924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:02.640562   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:02.693651   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:02.693747   11487 retry.go:31] will retry after 408.118115ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:03.102258   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:03.154339   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:03.154433   11487 retry.go:31] will retry after 622.109303ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:03.777892   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:03.829426   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	W0415 18:29:03.829525   11487 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	
	W0415 18:29:03.829543   11487 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:03.829597   11487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:29:03.829653   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:03.878104   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:03.878197   11487 retry.go:31] will retry after 245.683934ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:04.124628   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:04.175270   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:04.175363   11487 retry.go:31] will retry after 487.205704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:04.664004   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:04.718665   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	I0415 18:29:04.718757   11487 retry.go:31] will retry after 571.079582ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:05.292205   11487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000
	W0415 18:29:05.344032   11487 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000 returned with exit code 1
	W0415 18:29:05.344132   11487 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	
	W0415 18:29:05.344151   11487 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-313000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-313000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	I0415 18:29:05.344165   11487 fix.go:56] duration metric: took 6m27.163515953s for fixHost
	I0415 18:29:05.344174   11487 start.go:83] releasing machines lock for "force-systemd-flag-313000", held for 6m27.163567387s
	W0415 18:29:05.344258   11487 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-313000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-313000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 18:29:05.387669   11487 out.go:177] 
	W0415 18:29:05.409005   11487 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 18:29:05.409066   11487 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 18:29:05.409106   11487 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 18:29:05.430855   11487 out.go:177] 
** /stderr **
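The captured stderr above ends with the DRV_CREATE_TIMEOUT reason after the 360-second create-host timeout, and the exit status 52 reported on the next line is the code the start command returned for that failure in this run. A minimal manual reproduction (the start command is copied verbatim from this log; the "echo $?" step is added here purely for illustration):

	out/minikube-darwin-amd64 start -p force-systemd-flag-313000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
	echo $?   # printed 52 in this run, matching the DRV_CREATE_TIMEOUT exit above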
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-313000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-313000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-313000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (199.965033ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-313000 host status: state: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
	
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-313000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-15 18:29:05.710074 -0700 PDT m=+6742.132399877
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-313000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-313000:
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-313000",
	        "Id": "0a2cb34e319fdbc4b137a60e031189a661988b599e45b58b355cf5fa01de14b6",
	        "Created": "2024-04-16T01:22:58.467822099Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-313000"
	        }
	    }
	]
-- /stdout --
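Note that this post-mortem "docker inspect" matched the leftover minikube-created network named force-systemd-flag-313000 (Driver "bridge", subnet 192.168.94.0/24, labeled created_by.minikube.sigs.k8s.io), not a container: the container itself was never created, consistent with every "No such container" error above. A sketch for disambiguating the two object types at post-mortem time, using the type-specific inspect forms that also appear elsewhere in this log:

	docker network inspect force-systemd-flag-313000      # returned the bridge network shown above
	docker container inspect force-systemd-flag-313000    # failed: No such container: force-systemd-flag-313000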
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-313000 -n force-systemd-flag-313000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-313000 -n force-systemd-flag-313000: exit status 7 (111.960954ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0415 18:29:05.871696   12175 status.go:249] status error: host: state: unknown state "force-systemd-flag-313000": docker container inspect force-systemd-flag-313000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-313000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-313000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-313000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-313000
--- FAIL: TestForceSystemdFlag (757.42s)
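The failure pattern for TestForceSystemdFlag is uniform throughout: the docker container never came up within the 360-second timeout, every SSH port lookup and state query failed with "No such container", and the post-mortem found only the orphaned network before cleanup. The remediation the log itself suggests can be run as follows (the delete command is copied verbatim from the output above; the follow-up is the log's own suggestion):

	out/minikube-darwin-amd64 delete -p force-systemd-flag-313000   # removes the profile and the leftover network
	# then disable any conflicting VPN or firewall software and retry the start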
TestForceSystemdEnv (758.85s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-357000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0415 18:05:04.961464    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:05:15.027367    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 18:08:08.077190    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:10:04.960123    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:10:15.025315    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 18:13:18.073335    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 18:15:04.959747    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:15:15.025163    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-357000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.765202168s)
-- stdout --
	* [force-systemd-env-357000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-357000" primary control-plane node in "force-systemd-env-357000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-357000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I0415 18:04:34.526385   10721 out.go:291] Setting OutFile to fd 1 ...
	I0415 18:04:34.526587   10721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:04:34.526592   10721 out.go:304] Setting ErrFile to fd 2...
	I0415 18:04:34.526595   10721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:04:34.526797   10721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 18:04:34.528244   10721 out.go:298] Setting JSON to false
	I0415 18:04:34.550519   10721 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5645,"bootTime":1713223829,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 18:04:34.550612   10721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:04:34.572707   10721 out.go:177] * [force-systemd-env-357000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 18:04:34.593512   10721 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 18:04:34.593552   10721 notify.go:220] Checking for updates...
	I0415 18:04:34.638382   10721 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 18:04:34.659469   10721 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 18:04:34.680491   10721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:04:34.701363   10721 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 18:04:34.722617   10721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0415 18:04:34.744377   10721 config.go:182] Loaded profile config "offline-docker-189000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:04:34.744529   10721 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:04:34.799288   10721 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 18:04:34.799474   10721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:04:34.905123   10721 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:106 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-16 01:04:34.894388088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:04:34.947736   10721 out.go:177] * Using the docker driver based on user configuration
	I0415 18:04:34.968635   10721 start.go:297] selected driver: docker
	I0415 18:04:34.968658   10721 start.go:901] validating driver "docker" against <nil>
	I0415 18:04:34.968675   10721 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:04:34.973137   10721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:04:35.081921   10721 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:106 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-16 01:04:35.071485126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:04:35.082108   10721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:04:35.082292   10721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 18:04:35.103330   10721 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 18:04:35.125340   10721 cni.go:84] Creating CNI manager for ""
	I0415 18:04:35.125385   10721 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:04:35.125403   10721 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 18:04:35.125551   10721 start.go:340] cluster config:
	{Name:force-systemd-env-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:04:35.147249   10721 out.go:177] * Starting "force-systemd-env-357000" primary control-plane node in "force-systemd-env-357000" cluster
	I0415 18:04:35.189213   10721 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 18:04:35.211252   10721 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 18:04:35.253277   10721 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:04:35.253312   10721 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 18:04:35.253347   10721 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:04:35.253364   10721 cache.go:56] Caching tarball of preloaded images
	I0415 18:04:35.253587   10721 preload.go:173] Found /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:04:35.253606   10721 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:04:35.254475   10721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/force-systemd-env-357000/config.json ...
	I0415 18:04:35.254684   10721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/force-systemd-env-357000/config.json: {Name:mk1db48ece58ebb44967149ea93b26ecbae03f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:04:35.304644   10721 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0415 18:04:35.304664   10721 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0415 18:04:35.304684   10721 cache.go:194] Successfully downloaded all kic artifacts
	I0415 18:04:35.304735   10721 start.go:360] acquireMachinesLock for force-systemd-env-357000: {Name:mk12cf3dd54756c3355660f130045bf3676fe88e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:04:35.304908   10721 start.go:364] duration metric: took 157.671µs to acquireMachinesLock for "force-systemd-env-357000"
	I0415 18:04:35.304935   10721 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:04:35.304998   10721 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:04:35.347101   10721 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:04:35.347516   10721 start.go:159] libmachine.API.Create for "force-systemd-env-357000" (driver="docker")
	I0415 18:04:35.347562   10721 client.go:168] LocalClient.Create starting
	I0415 18:04:35.347800   10721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:04:35.347901   10721 main.go:141] libmachine: Decoding PEM data...
	I0415 18:04:35.347935   10721 main.go:141] libmachine: Parsing certificate...
	I0415 18:04:35.348027   10721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:04:35.348101   10721 main.go:141] libmachine: Decoding PEM data...
	I0415 18:04:35.348117   10721 main.go:141] libmachine: Parsing certificate...
	I0415 18:04:35.348988   10721 cli_runner.go:164] Run: docker network inspect force-systemd-env-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:04:35.398476   10721 cli_runner.go:211] docker network inspect force-systemd-env-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:04:35.398580   10721 network_create.go:281] running [docker network inspect force-systemd-env-357000] to gather additional debugging logs...
	I0415 18:04:35.398602   10721 cli_runner.go:164] Run: docker network inspect force-systemd-env-357000
	W0415 18:04:35.445890   10721 cli_runner.go:211] docker network inspect force-systemd-env-357000 returned with exit code 1
	I0415 18:04:35.445919   10721 network_create.go:284] error running [docker network inspect force-systemd-env-357000]: docker network inspect force-systemd-env-357000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-357000 not found
	I0415 18:04:35.445932   10721 network_create.go:286] output of [docker network inspect force-systemd-env-357000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-357000 not found
	
	** /stderr **
	I0415 18:04:35.446059   10721 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:04:35.495636   10721 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:04:35.497247   10721 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:04:35.498830   10721 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:04:35.499167   10721 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f48c0}
	I0415 18:04:35.499181   10721 network_create.go:124] attempt to create docker network force-systemd-env-357000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 18:04:35.499255   10721 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-357000 force-systemd-env-357000
	W0415 18:04:35.548050   10721 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-357000 force-systemd-env-357000 returned with exit code 1
	W0415 18:04:35.548085   10721 network_create.go:149] failed to create docker network force-systemd-env-357000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-357000 force-systemd-env-357000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 18:04:35.548106   10721 network_create.go:116] failed to create docker network force-systemd-env-357000 192.168.76.0/24, will retry: subnet is taken
	I0415 18:04:35.549451   10721 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:04:35.549823   10721 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00239dbd0}
	I0415 18:04:35.549839   10721 network_create.go:124] attempt to create docker network force-systemd-env-357000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 18:04:35.549911   10721 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-357000 force-systemd-env-357000
	I0415 18:04:35.633988   10721 network_create.go:108] docker network force-systemd-env-357000 192.168.85.0/24 created
	I0415 18:04:35.634042   10721 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-357000" container
	I0415 18:04:35.634160   10721 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:04:35.684049   10721 cli_runner.go:164] Run: docker volume create force-systemd-env-357000 --label name.minikube.sigs.k8s.io=force-systemd-env-357000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:04:35.733273   10721 oci.go:103] Successfully created a docker volume force-systemd-env-357000
	I0415 18:04:35.733390   10721 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-357000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-357000 --entrypoint /usr/bin/test -v force-systemd-env-357000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:04:36.055442   10721 oci.go:107] Successfully prepared a docker volume force-systemd-env-357000
	I0415 18:04:36.055481   10721 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:04:36.055496   10721 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:04:36.055594   10721 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-357000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:10:35.348380   10721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:10:35.348555   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:35.400431   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:10:35.400544   10721 retry.go:31] will retry after 350.895375ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:35.753345   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:35.804335   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:10:35.804447   10721 retry.go:31] will retry after 468.481808ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:36.275380   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:36.328273   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:10:36.328387   10721 retry.go:31] will retry after 839.250403ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:37.170020   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:37.220250   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	W0415 18:10:37.220355   10721 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	
	W0415 18:10:37.220373   10721 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:37.220434   10721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:10:37.220502   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:37.270952   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:10:37.271061   10721 retry.go:31] will retry after 222.971215ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:37.496464   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:37.548319   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:10:37.548409   10721 retry.go:31] will retry after 304.002988ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:37.854186   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:37.904737   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:10:37.904835   10721 retry.go:31] will retry after 401.257386ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:38.306417   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:38.354876   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:10:38.354968   10721 retry.go:31] will retry after 499.21063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:38.855093   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:10:38.905753   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	W0415 18:10:38.905857   10721 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	
	W0415 18:10:38.905873   10721 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:38.905901   10721 start.go:128] duration metric: took 6m3.601459325s to createHost
	I0415 18:10:38.905910   10721 start.go:83] releasing machines lock for "force-systemd-env-357000", held for 6m3.601562285s
	W0415 18:10:38.905928   10721 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 18:10:38.906357   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:38.954757   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:38.954814   10721 delete.go:82] Unable to get host status for force-systemd-env-357000, assuming it has already been deleted: state: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	W0415 18:10:38.954918   10721 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 18:10:38.954933   10721 start.go:728] Will try again in 5 seconds ...
	I0415 18:10:43.955468   10721 start.go:360] acquireMachinesLock for force-systemd-env-357000: {Name:mk12cf3dd54756c3355660f130045bf3676fe88e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:10:43.955670   10721 start.go:364] duration metric: took 162.688µs to acquireMachinesLock for "force-systemd-env-357000"
	I0415 18:10:43.955702   10721 start.go:96] Skipping create...Using existing machine configuration
	I0415 18:10:43.955719   10721 fix.go:54] fixHost starting: 
	I0415 18:10:43.956134   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:44.008797   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:44.008850   10721 fix.go:112] recreateIfNeeded on force-systemd-env-357000: state= err=unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:44.008870   10721 fix.go:117] machineExists: false. err=machine does not exist
	I0415 18:10:44.052426   10721 out.go:177] * docker "force-systemd-env-357000" container is missing, will recreate.
	I0415 18:10:44.075074   10721 delete.go:124] DEMOLISHING force-systemd-env-357000 ...
	I0415 18:10:44.075181   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:44.184458   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	W0415 18:10:44.184555   10721 stop.go:83] unable to get state: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:44.184575   10721 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:44.185064   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:44.233799   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:44.233848   10721 delete.go:82] Unable to get host status for force-systemd-env-357000, assuming it has already been deleted: state: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:44.233939   10721 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-357000
	W0415 18:10:44.281896   10721 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-357000 returned with exit code 1
	I0415 18:10:44.281931   10721 kic.go:371] could not find the container force-systemd-env-357000 to remove it. will try anyways
	I0415 18:10:44.282020   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:44.329892   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	W0415 18:10:44.329936   10721 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:44.330015   10721 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-357000 /bin/bash -c "sudo init 0"
	W0415 18:10:44.377432   10721 cli_runner.go:211] docker exec --privileged -t force-systemd-env-357000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 18:10:44.377461   10721 oci.go:650] error shutdown force-systemd-env-357000: docker exec --privileged -t force-systemd-env-357000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:45.379863   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:45.432349   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:45.432401   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:45.432410   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:10:45.432433   10721 retry.go:31] will retry after 479.849854ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:45.914639   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:45.968984   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:45.969029   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:45.969042   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:10:45.969070   10721 retry.go:31] will retry after 1.119384723s: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:47.089686   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:47.143872   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:47.143921   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:47.143933   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:10:47.143960   10721 retry.go:31] will retry after 1.515009559s: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:48.660474   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:48.710864   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:48.710914   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:48.710926   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:10:48.710951   10721 retry.go:31] will retry after 1.873829796s: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:50.586000   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:50.639494   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:50.639542   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:50.639553   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:10:50.639580   10721 retry.go:31] will retry after 2.103181417s: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:52.744924   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:52.797328   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:52.797378   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:52.797387   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:10:52.797410   10721 retry.go:31] will retry after 3.095997218s: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:55.895748   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:10:55.949380   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:10:55.949428   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:10:55.949440   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:10:55.949464   10721 retry.go:31] will retry after 8.062564817s: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:11:04.014289   10721 cli_runner.go:164] Run: docker container inspect force-systemd-env-357000 --format={{.State.Status}}
	W0415 18:11:04.066186   10721 cli_runner.go:211] docker container inspect force-systemd-env-357000 --format={{.State.Status}} returned with exit code 1
	I0415 18:11:04.066240   10721 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:11:04.066249   10721 oci.go:664] temporary error: container force-systemd-env-357000 status is  but expect it to be exited
	I0415 18:11:04.066282   10721 oci.go:88] couldn't shut down force-systemd-env-357000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	 
	I0415 18:11:04.066349   10721 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-357000
	I0415 18:11:04.117653   10721 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-357000
	W0415 18:11:04.165812   10721 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-357000 returned with exit code 1
	I0415 18:11:04.165921   10721 cli_runner.go:164] Run: docker network inspect force-systemd-env-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:11:04.214438   10721 cli_runner.go:164] Run: docker network rm force-systemd-env-357000
	I0415 18:11:04.327208   10721 fix.go:124] Sleeping 1 second for extra luck!
	I0415 18:11:05.329482   10721 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:11:05.351294   10721 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:11:05.351481   10721 start.go:159] libmachine.API.Create for "force-systemd-env-357000" (driver="docker")
	I0415 18:11:05.351510   10721 client.go:168] LocalClient.Create starting
	I0415 18:11:05.351738   10721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 18:11:05.351841   10721 main.go:141] libmachine: Decoding PEM data...
	I0415 18:11:05.351875   10721 main.go:141] libmachine: Parsing certificate...
	I0415 18:11:05.351955   10721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 18:11:05.352033   10721 main.go:141] libmachine: Decoding PEM data...
	I0415 18:11:05.352060   10721 main.go:141] libmachine: Parsing certificate...
	I0415 18:11:05.374112   10721 cli_runner.go:164] Run: docker network inspect force-systemd-env-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:11:05.426340   10721 cli_runner.go:211] docker network inspect force-systemd-env-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:11:05.426442   10721 network_create.go:281] running [docker network inspect force-systemd-env-357000] to gather additional debugging logs...
	I0415 18:11:05.426461   10721 cli_runner.go:164] Run: docker network inspect force-systemd-env-357000
	W0415 18:11:05.476054   10721 cli_runner.go:211] docker network inspect force-systemd-env-357000 returned with exit code 1
	I0415 18:11:05.476087   10721 network_create.go:284] error running [docker network inspect force-systemd-env-357000]: docker network inspect force-systemd-env-357000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-357000 not found
	I0415 18:11:05.476099   10721 network_create.go:286] output of [docker network inspect force-systemd-env-357000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-357000 not found
	
	** /stderr **
	I0415 18:11:05.476212   10721 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:11:05.526326   10721 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:11:05.527784   10721 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:11:05.529349   10721 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:11:05.530992   10721 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:11:05.532635   10721 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:11:05.534239   10721 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:11:05.534796   10721 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024865c0}
	I0415 18:11:05.534809   10721 network_create.go:124] attempt to create docker network force-systemd-env-357000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0415 18:11:05.534910   10721 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-357000 force-systemd-env-357000
	I0415 18:11:05.619190   10721 network_create.go:108] docker network force-systemd-env-357000 192.168.103.0/24 created
	I0415 18:11:05.619226   10721 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-357000" container
	I0415 18:11:05.619324   10721 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:11:05.669800   10721 cli_runner.go:164] Run: docker volume create force-systemd-env-357000 --label name.minikube.sigs.k8s.io=force-systemd-env-357000 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:11:05.717979   10721 oci.go:103] Successfully created a docker volume force-systemd-env-357000
	I0415 18:11:05.718095   10721 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-357000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-357000 --entrypoint /usr/bin/test -v force-systemd-env-357000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 18:11:05.966722   10721 oci.go:107] Successfully prepared a docker volume force-systemd-env-357000
	I0415 18:11:05.966760   10721 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:11:05.966773   10721 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:11:05.966894   10721 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-357000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:17:05.351416   10721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:17:05.351544   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:05.402926   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:05.403050   10721 retry.go:31] will retry after 340.247209ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:05.745707   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:05.797368   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:05.797488   10721 retry.go:31] will retry after 356.865974ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:06.156396   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:06.207348   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:06.207449   10721 retry.go:31] will retry after 805.76557ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:07.015661   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:07.065714   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	W0415 18:17:07.065825   10721 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	
	W0415 18:17:07.065845   10721 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:07.065900   10721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:17:07.065955   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:07.114290   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:07.114394   10721 retry.go:31] will retry after 268.904621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:07.385658   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:07.438455   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:07.438579   10721 retry.go:31] will retry after 235.417676ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:07.676392   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:07.727067   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:07.727165   10721 retry.go:31] will retry after 387.441536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:08.116987   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:08.171898   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:08.171996   10721 retry.go:31] will retry after 744.410672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:08.918812   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:08.971905   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	W0415 18:17:08.972016   10721 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	
	W0415 18:17:08.972033   10721 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:08.972047   10721 start.go:128] duration metric: took 6m3.64309145s to createHost
	I0415 18:17:08.972113   10721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:17:08.972166   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:09.020348   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:09.020454   10721 retry.go:31] will retry after 297.043746ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:09.318864   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:09.369802   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:09.369899   10721 retry.go:31] will retry after 370.640761ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:09.741856   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:09.792925   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:09.793025   10721 retry.go:31] will retry after 607.025878ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:10.401410   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:10.453747   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	W0415 18:17:10.453846   10721 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	
	W0415 18:17:10.453862   10721 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:10.453918   10721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 18:17:10.453980   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:10.502335   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:10.502437   10721 retry.go:31] will retry after 292.012728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:10.795405   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:10.847304   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:10.847405   10721 retry.go:31] will retry after 454.050377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:11.303927   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:11.355288   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	I0415 18:17:11.355399   10721 retry.go:31] will retry after 669.899478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:12.027680   10721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000
	W0415 18:17:12.080594   10721 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000 returned with exit code 1
	W0415 18:17:12.080696   10721 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	
	W0415 18:17:12.080712   10721 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	I0415 18:17:12.080722   10721 fix.go:56] duration metric: took 6m28.125612925s for fixHost
	I0415 18:17:12.080730   10721 start.go:83] releasing machines lock for "force-systemd-env-357000", held for 6m28.125653369s
	W0415 18:17:12.080806   10721 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-357000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-357000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 18:17:12.124434   10721 out.go:177] 
	W0415 18:17:12.145278   10721 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 18:17:12.145315   10721 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 18:17:12.145342   10721 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 18:17:12.166310   10721 out.go:177] 

                                                
                                                
** /stderr **
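The retry sequence in the stderr log above is minikube polling docker container inspect --format={{.State.Status}} with growing delays, roughly 0.5s up to 8s, until the container reports "exited" or the schedule runs out, at which point oci.go:88 logs "couldn't shut down ... (might be okay)" and falls back to docker rm -f. A minimal Go sketch of that poll-with-backoff pattern; the helper name and the delay schedule are illustrative assumptions, not minikube's actual retry.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForExited polls `docker container inspect` until the container
// reports the "exited" state or the delay schedule is exhausted.
// Helper name and schedule are illustrative only.
func waitForExited(name string) error {
	delays := []time.Duration{
		500 * time.Millisecond, 1 * time.Second, 2 * time.Second,
		3 * time.Second, 8 * time.Second, // roughly the waits seen in the log
	}
	for _, d := range delays {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil
		}
		// "No such container" makes inspect exit non-zero, so the state
		// stays unknown and the loop keeps retrying, as oci.go:662 does.
		time.Sleep(d)
	}
	return fmt.Errorf("couldn't verify container %q is exited", name)
}

func main() {
	if err := waitForExited("force-systemd-env-357000"); err != nil {
		fmt.Println(err)
	}
}

Every "No such container" inspect failure keeps the state unknown rather than treating it as success, which is why the log shows retries even though the container was already gone.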
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-357000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-357000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-357000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (198.571118ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-357000 host status: state: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000
	

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-357000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
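For context, the probe that just failed (docker_test.go:110) asserts that the node's Docker daemon reports the systemd cgroup driver, which is what the TestForceSystemdEnv name implies it forces. A standalone sketch of the same check against a local daemon, skipping the minikube ssh hop since the node never came up; illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Probes the cgroup driver the same way the test does, but against a
// local daemon instead of going through `minikube ssh`. The expected
// value "systemd" is inferred from the test name, not from this log.
func main() {
	out, err := exec.Command("docker", "info",
		"--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
	} else {
		fmt.Println("cgroup driver is systemd")
	}
}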
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-15 18:17:12.440576 -0700 PDT m=+6028.932421995
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-357000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-357000",
	        "Id": "fb3af53dda24a0b7d5a168235da3088c281fdcdcb622cad3d99ed932d00b3ce0",
	        "Created": "2024-04-16T01:11:05.579611853Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
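Note that the JSON above describes the leftover Docker network named force-systemd-env-357000 (bridge driver, subnet 192.168.103.0/24, created at 01:11:05Z, matching the network_create entry earlier), not a container: plain docker inspect resolves a name against every object type, and since the container was never created the name only matches the recreated network. docker inspect --type container force-systemd-env-357000 would instead fail with "No such container".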
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-357000 -n force-systemd-env-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-357000 -n force-systemd-env-357000: exit status 7 (111.906762ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 18:17:12.602115   11608 status.go:249] status error: host: state: unknown state "force-systemd-env-357000": docker container inspect force-systemd-env-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-357000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-357000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-357000
--- FAIL: TestForceSystemdEnv (758.85s)
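The DRV_CREATE_TIMEOUT above fires because createHost ran under a 360-second budget (the duration metric shows 6m3.6s) while the preload-extraction docker run never returned. A minimal sketch of that guard pattern, running one step under a deadline and abandoning it once the context expires; the placeholder command is an assumption, not minikube's provisioning step:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// Runs one provisioning step under a 360-second deadline and gives up
// when the context expires. The `docker version` command is only a
// placeholder for the long-running step that hung in this test.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "docker", "version")
	if err := cmd.Run(); err != nil {
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("create host timed out in 360 seconds")
			return
		}
		fmt.Println("step failed:", err)
		return
	}
	fmt.Println("step finished within the deadline")
}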

TestMountStart/serial/VerifyMountPostStop (872.04s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host
E0415 17:05:04.722392    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:05:14.788540    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:06:37.828650    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:10:04.717171    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:10:14.784277    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:15:04.713289    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:15:14.779217    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host: signal: killed (14m31.606379527s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host" : signal: killed
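"signal: killed" here means the surrounding harness terminated the hung command rather than the command failing on its own: ls /minikube-host over ssh blocked for 14m31.6s before the test's command timeout killed the process. The interleaved cert_rotation errors come from a separate long-lived process (pid 1443) watching client certificates for other profiles (addons-306000, functional-829000) and appear incidental to the mount hang.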
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-004000
helpers_test.go:235: (dbg) docker inspect mount-start-2-004000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a521fefce57622d20c71d0b639aecafd7e3052897952b0404bb0a27ddba0e2bf",
	        "Created": "2024-04-16T00:01:08.978429204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 122294,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-16T00:01:20.029167767Z",
	            "FinishedAt": "2024-04-16T00:01:17.724317495Z"
	        },
	        "Image": "sha256:85e471306d0bb34ec24eefa76ceee5f0e4c46f1efd31247cdf11d7eba2710ed6",
	        "ResolvConfPath": "/var/lib/docker/containers/a521fefce57622d20c71d0b639aecafd7e3052897952b0404bb0a27ddba0e2bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a521fefce57622d20c71d0b639aecafd7e3052897952b0404bb0a27ddba0e2bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/a521fefce57622d20c71d0b639aecafd7e3052897952b0404bb0a27ddba0e2bf/hosts",
	        "LogPath": "/var/lib/docker/containers/a521fefce57622d20c71d0b639aecafd7e3052897952b0404bb0a27ddba0e2bf/a521fefce57622d20c71d0b639aecafd7e3052897952b0404bb0a27ddba0e2bf-json.log",
	        "Name": "/mount-start-2-004000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-004000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-004000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c31b7f29411ed73e938e41137b774da5dbc599e9f4b959f5eb74533aacf095fb-init/diff:/var/lib/docker/overlay2/208f9be37678a07b207c9c644fd6a0378bbda6fbfec3be606048d2ffc5318b3f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c31b7f29411ed73e938e41137b774da5dbc599e9f4b959f5eb74533aacf095fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c31b7f29411ed73e938e41137b774da5dbc599e9f4b959f5eb74533aacf095fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c31b7f29411ed73e938e41137b774da5dbc599e9f4b959f5eb74533aacf095fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-004000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-004000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-004000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-004000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-004000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea08856eeac278e83277d86411324c365b9019bc37c4b6eb733eeaa226ca74f",
	            "SandboxKey": "/var/run/docker/netns/cea08856eeac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51593"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51594"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51595"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51596"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51597"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-004000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "84bedd55a2fbc5b6ecf755d59bc685e0e2bf4df7f8c95aea99b39a5abec3af37",
	                    "EndpointID": "ee1a7418b3496b5b2ccb2293f750eb7005a2cff890745fee9560870bf8ede90b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-004000",
	                        "a521fefce576"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
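The inspect output shows the container Running with the /host_mnt/Users to /minikube-host bind in place, so the hang is not a missing mount. A quick way to pull just that mount table is an inspect template, shown as a small Go sketch (the profile name comes from the failing test; the template is standard docker inspect formatting):

package main

import (
	"fmt"
	"os/exec"
)

// Prints only the mount table from `docker container inspect`, the
// quick way to confirm the /host_mnt/Users -> /minikube-host bind in
// the post-mortem JSON above. Profile name taken from the test.
func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"mount-start-2-004000",
		"-f", `{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}`,
	).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Print(string(out))
}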
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-004000 -n mount-start-2-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-004000 -n mount-start-2-004000: exit status 6 (373.15193ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:15:59.902404    8631 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-004000" does not appear in /Users/jenkins/minikube-integration/18647-976/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-004000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (872.04s)

TestMultiNode/serial/FreshStart2Nodes (752.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-243000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0415 17:18:07.820481    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:20:04.707396    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:20:14.772696    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:23:17.939578    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:25:04.831016    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:25:14.896330    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-243000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m32.040440972s)

                                                
                                                
-- stdout --
	* [multinode-243000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-243000" primary control-plane node in "multinode-243000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-243000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:17:08.995784    8735 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:17:08.995965    8735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:17:08.995970    8735 out.go:304] Setting ErrFile to fd 2...
	I0415 17:17:08.995974    8735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:17:08.996150    8735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:17:08.997638    8735 out.go:298] Setting JSON to false
	I0415 17:17:09.021406    8735 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2800,"bootTime":1713223829,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 17:17:09.021500    8735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:17:09.043679    8735 out.go:177] * [multinode-243000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 17:17:09.085462    8735 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 17:17:09.085523    8735 notify.go:220] Checking for updates...
	I0415 17:17:09.128095    8735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 17:17:09.149413    8735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 17:17:09.170521    8735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 17:17:09.192166    8735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 17:17:09.213390    8735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 17:17:09.234762    8735 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:17:09.290998    8735 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:17:09.291172    8735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:17:09.399342    8735 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-16 00:17:09.388247045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:17:09.441589    8735 out.go:177] * Using the docker driver based on user configuration
	I0415 17:17:09.463526    8735 start.go:297] selected driver: docker
	I0415 17:17:09.463557    8735 start.go:901] validating driver "docker" against <nil>
	I0415 17:17:09.463574    8735 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 17:17:09.468210    8735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:17:09.574945    8735 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-16 00:17:09.564919651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
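
The two docker info dumps above come from the driver health check the log shows: minikube runs docker system info --format "{{json .}}" and decodes the result. A minimal sketch of that check, decoding only a hypothetical subset of the fields visible in the dump (ServerVersion, OperatingSystem, NCPU, MemTotal):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo captures just a few of the JSON fields the dump above shows;
    // the full output has many more.
    type dockerInfo struct {
    	ServerVersion   string `json:"ServerVersion"`
    	OperatingSystem string `json:"OperatingSystem"`
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
    	// Same command the log records via cli_runner.go.
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s on %s: %d CPUs, %d bytes RAM\n",
    		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }

For this run that would print something like "26.0.0 on Docker Desktop: 12 CPUs, 6211084288 bytes RAM", matching the values in the dump.
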
	I0415 17:17:09.575162    8735 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:17:09.575332    8735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 17:17:09.597078    8735 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 17:17:09.618160    8735 cni.go:84] Creating CNI manager for ""
	I0415 17:17:09.618192    8735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 17:17:09.618203    8735 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 17:17:09.618323    8735 start.go:340] cluster config:
	{Name:multinode-243000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-243000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:17:09.640047    8735 out.go:177] * Starting "multinode-243000" primary control-plane node in "multinode-243000" cluster
	I0415 17:17:09.682052    8735 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 17:17:09.703994    8735 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 17:17:09.746185    8735 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:17:09.746254    8735 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 17:17:09.746258    8735 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:17:09.746276    8735 cache.go:56] Caching tarball of preloaded images
	I0415 17:17:09.746496    8735 preload.go:173] Found /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 17:17:09.746515    8735 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 17:17:09.748169    8735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/multinode-243000/config.json ...
	I0415 17:17:09.748271    8735 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/multinode-243000/config.json: {Name:mk69e1610236d15b8269bcd243854fa0b65b7bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:17:09.798889    8735 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0415 17:17:09.798908    8735 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0415 17:17:09.798928    8735 cache.go:194] Successfully downloaded all kic artifacts
	I0415 17:17:09.798968    8735 start.go:360] acquireMachinesLock for multinode-243000: {Name:mk4161ad8ce629d0c03264b515abcdde42d39cc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:17:09.799513    8735 start.go:364] duration metric: took 530.093µs to acquireMachinesLock for "multinode-243000"
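
The acquireMachinesLock lines above show a named lock taken with the parameters {Delay:500ms Timeout:10m0s}: poll every 500 ms, give up after 10 minutes. A rough in-process stand-in for that acquire-with-timeout shape (the real machines lock is file-backed so it works across processes; this sketch is illustrative only):

    package main

    import (
    	"fmt"
    	"time"
    )

    // acquire keeps polling the lock every delay until timeout, mirroring the
    // Delay/Timeout parameters logged above.
    func acquire(lock chan struct{}, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		select {
    		case lock <- struct{}{}:
    			return nil // acquired
    		default:
    			time.Sleep(delay)
    		}
    	}
    	return fmt.Errorf("timed out acquiring machines lock")
    }

    func main() {
    	lock := make(chan struct{}, 1)
    	start := time.Now()
    	if err := acquire(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("took %s to acquire the lock\n", time.Since(start))
    	<-lock // release
    }
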
	I0415 17:17:09.799541    8735 start.go:93] Provisioning new machine with config: &{Name:multinode-243000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-243000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 17:17:09.799626    8735 start.go:125] createHost starting for "" (driver="docker")
	I0415 17:17:09.841879    8735 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 17:17:09.842235    8735 start.go:159] libmachine.API.Create for "multinode-243000" (driver="docker")
	I0415 17:17:09.842284    8735 client.go:168] LocalClient.Create starting
	I0415 17:17:09.842514    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 17:17:09.842616    8735 main.go:141] libmachine: Decoding PEM data...
	I0415 17:17:09.842645    8735 main.go:141] libmachine: Parsing certificate...
	I0415 17:17:09.842745    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 17:17:09.842820    8735 main.go:141] libmachine: Decoding PEM data...
	I0415 17:17:09.842850    8735 main.go:141] libmachine: Parsing certificate...
	I0415 17:17:09.843698    8735 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 17:17:09.893098    8735 cli_runner.go:211] docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 17:17:09.893222    8735 network_create.go:281] running [docker network inspect multinode-243000] to gather additional debugging logs...
	I0415 17:17:09.893243    8735 cli_runner.go:164] Run: docker network inspect multinode-243000
	W0415 17:17:09.940462    8735 cli_runner.go:211] docker network inspect multinode-243000 returned with exit code 1
	I0415 17:17:09.940490    8735 network_create.go:284] error running [docker network inspect multinode-243000]: docker network inspect multinode-243000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-243000 not found
	I0415 17:17:09.940501    8735 network_create.go:286] output of [docker network inspect multinode-243000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-243000 not found
	
	** /stderr **
	I0415 17:17:09.940670    8735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:17:09.990909    8735 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:17:09.992528    8735 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:17:09.992880    8735 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002313760}
	I0415 17:17:09.992896    8735 network_create.go:124] attempt to create docker network multinode-243000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 17:17:09.992964    8735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000
	I0415 17:17:10.078769    8735 network_create.go:108] docker network multinode-243000 192.168.67.0/24 created
	I0415 17:17:10.078811    8735 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-243000" container
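
The network.go lines above show the subnet scan: walk candidate private /24s (192.168.49.0, 192.168.58.0, 192.168.67.0, ...), skip any already reserved by an existing docker network, and take the first free one; the gateway gets .1 and the node the static .2. A sketch of that scan, with the step of 9 through the third octet inferred from the 49 → 58 → 67 progression in the log rather than confirmed from source:

    package main

    import "fmt"

    func main() {
    	// Subnets the daemon already uses; in the run above these were the
    	// networks on 192.168.49.0/24 and 192.168.58.0/24.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    	}
    	for octet := 49; octet <= 255; octet += 9 { // step inferred from the log
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is reserved")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet)
    		fmt.Printf("gateway 192.168.%d.1, static node IP 192.168.%d.2\n", octet, octet)
    		break
    	}
    }
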
	I0415 17:17:10.078926    8735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 17:17:10.127401    8735 cli_runner.go:164] Run: docker volume create multinode-243000 --label name.minikube.sigs.k8s.io=multinode-243000 --label created_by.minikube.sigs.k8s.io=true
	I0415 17:17:10.176700    8735 oci.go:103] Successfully created a docker volume multinode-243000
	I0415 17:17:10.176822    8735 cli_runner.go:164] Run: docker run --rm --name multinode-243000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-243000 --entrypoint /usr/bin/test -v multinode-243000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 17:17:10.513526    8735 oci.go:107] Successfully prepared a docker volume multinode-243000
	I0415 17:17:10.513564    8735 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:17:10.513579    8735 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 17:17:10.513672    8735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-243000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
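
Nothing is logged between the extraction command above (17:17:10) and the next line (17:23:09); that roughly six-minute stall in the preload extraction is what trips the 360-second createHost timeout reported further down. A sketch of bounding a long docker run with such a deadline (the busybox command is a placeholder, not minikube's actual implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// 360 s matches the "create host timed out in 360.000000 seconds"
    	// message later in this log.
    	ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
    	defer cancel()
    	// Placeholder command; the real step is the `docker run ... tar -I lz4 -xf`
    	// extraction shown above.
    	cmd := exec.CommandContext(ctx, "docker", "run", "--rm", "busybox", "true")
    	if err := cmd.Run(); err != nil {
    		if ctx.Err() == context.DeadlineExceeded {
    			fmt.Println("create host timed out in 360.000000 seconds")
    		} else {
    			fmt.Println("docker run failed:", err)
    		}
    	}
    }
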
	I0415 17:23:09.963861    8735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:23:09.963999    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:10.015883    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:23:10.016012    8735 retry.go:31] will retry after 363.602263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:10.381965    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:10.434978    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:23:10.435082    8735 retry.go:31] will retry after 510.112193ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:10.946649    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:10.998034    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:23:10.998127    8735 retry.go:31] will retry after 787.884817ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:11.787732    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:11.840628    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:23:11.840729    8735 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:23:11.840746    8735 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
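
The retry.go lines above and below follow one pattern: the container was never created, so every docker container inspect port lookup fails, and each failure is retried after a growing, jittered delay (363ms, 510ms, 787ms, ...) before the df probe is abandoned. A sketch of that retry shape; the attempt count and exact delay policy here are illustrative, not minikube's:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry re-runs op with a growing, jittered delay until it succeeds or
    // the attempts are exhausted.
    func retry(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	_ = retry(4, 300*time.Millisecond, func() error {
    		// Stand-in for the failing port-22 lookup: the container never
    		// came up, so inspect always reports "No such container".
    		return errors.New("No such container: multinode-243000")
    	})
    }
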
	I0415 17:23:11.840817    8735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:23:11.840869    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:11.888808    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:23:11.888903    8735 retry.go:31] will retry after 363.937872ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:12.255236    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:12.308488    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:23:12.308580    8735 retry.go:31] will retry after 520.526609ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:12.830286    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:12.881786    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:23:12.881886    8735 retry.go:31] will retry after 686.712541ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:13.571004    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:23:13.624867    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:23:13.624973    8735 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:23:13.624988    8735 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:13.625014    8735 start.go:128] duration metric: took 6m3.706228252s to createHost
	I0415 17:23:13.625020    8735 start.go:83] releasing machines lock for "multinode-243000", held for 6m3.706351383s
	W0415 17:23:13.625037    8735 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 17:23:13.625449    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:13.674611    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:13.674677    8735 delete.go:82] Unable to get host status for multinode-243000, assuming it has already been deleted: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	W0415 17:23:13.674795    8735 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 17:23:13.674803    8735 start.go:728] Will try again in 5 seconds ...
	I0415 17:23:18.676052    8735 start.go:360] acquireMachinesLock for multinode-243000: {Name:mk4161ad8ce629d0c03264b515abcdde42d39cc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:23:18.676256    8735 start.go:364] duration metric: took 163.216µs to acquireMachinesLock for "multinode-243000"
	I0415 17:23:18.676293    8735 start.go:96] Skipping create...Using existing machine configuration
	I0415 17:23:18.676310    8735 fix.go:54] fixHost starting: 
	I0415 17:23:18.676735    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:18.731211    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:18.731253    8735 fix.go:112] recreateIfNeeded on multinode-243000: state= err=unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:18.731272    8735 fix.go:117] machineExists: false. err=machine does not exist
	I0415 17:23:18.753281    8735 out.go:177] * docker "multinode-243000" container is missing, will recreate.
	I0415 17:23:18.794817    8735 delete.go:124] DEMOLISHING multinode-243000 ...
	I0415 17:23:18.795023    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:18.843797    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:23:18.843855    8735 stop.go:83] unable to get state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:18.843872    8735 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:18.844254    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:18.892992    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:18.893039    8735 delete.go:82] Unable to get host status for multinode-243000, assuming it has already been deleted: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:18.893120    8735 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:23:18.940405    8735 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
	I0415 17:23:18.940443    8735 kic.go:371] could not find the container multinode-243000 to remove it. will try anyways
	I0415 17:23:18.940525    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:18.988136    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:23:18.988187    8735 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:18.988272    8735 cli_runner.go:164] Run: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0"
	W0415 17:23:19.057767    8735 cli_runner.go:211] docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 17:23:19.057797    8735 oci.go:650] error shutdown multinode-243000: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:20.060195    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:20.113224    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:20.113270    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:20.113284    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:20.113311    8735 retry.go:31] will retry after 600.138662ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:20.715766    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:20.767168    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:20.767219    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:20.767232    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:20.767259    8735 retry.go:31] will retry after 1.020127277s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:21.789549    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:21.842031    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:21.842085    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:21.842101    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:21.842130    8735 retry.go:31] will retry after 819.004574ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:22.663511    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:22.715471    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:22.715515    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:22.715529    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:22.715555    8735 retry.go:31] will retry after 1.22788283s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:23.945823    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:24.080775    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:24.080827    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:24.080839    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:24.080861    8735 retry.go:31] will retry after 1.524319878s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:25.607524    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:25.659901    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:25.659946    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:25.659961    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:25.659985    8735 retry.go:31] will retry after 4.5073514s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:30.168324    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:30.222519    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:30.222584    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:30.222593    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:30.222619    8735 retry.go:31] will retry after 3.53986846s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:33.763647    8735 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:23:33.814484    8735 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:23:33.814529    8735 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:23:33.814540    8735 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:23:33.814572    8735 oci.go:88] couldn't shut down multinode-243000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	 
	I0415 17:23:33.814648    8735 cli_runner.go:164] Run: docker rm -f -v multinode-243000
	I0415 17:23:33.864310    8735 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:23:33.911892    8735 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
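
Before the forced docker rm -f -v above, the delete path tried a clean shutdown: docker exec ... "sudo init 0", then repeated docker container inspect --format={{.State.Status}} polls with growing delays, expecting the state "exited". Since the container never existed, every poll failed and the code fell through to the forced removal. A minimal sketch of that poll, with the container name taken from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitExited polls the container state until it reads "exited" or the
    // deadline passes; the poll interval here is illustrative.
    func waitExited(name string, deadline time.Duration) bool {
    	end := time.Now().Add(deadline)
    	for time.Now().Before(end) {
    		out, err := exec.Command("docker", "container", "inspect",
    			name, "--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "exited" {
    			return true
    		}
    		time.Sleep(time.Second)
    	}
    	return false
    }

    func main() {
    	if !waitExited("multinode-243000", 15*time.Second) {
    		fmt.Println("couldn't verify container is exited; falling back to docker rm -f -v")
    	}
    }
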
	I0415 17:23:33.912001    8735 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:23:33.960643    8735 cli_runner.go:164] Run: docker network rm multinode-243000
	I0415 17:23:34.113251    8735 fix.go:124] Sleeping 1 second for extra luck!
	I0415 17:23:35.115445    8735 start.go:125] createHost starting for "" (driver="docker")
	I0415 17:23:35.137378    8735 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 17:23:35.137555    8735 start.go:159] libmachine.API.Create for "multinode-243000" (driver="docker")
	I0415 17:23:35.137580    8735 client.go:168] LocalClient.Create starting
	I0415 17:23:35.137806    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 17:23:35.137911    8735 main.go:141] libmachine: Decoding PEM data...
	I0415 17:23:35.137945    8735 main.go:141] libmachine: Parsing certificate...
	I0415 17:23:35.138025    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 17:23:35.138114    8735 main.go:141] libmachine: Decoding PEM data...
	I0415 17:23:35.138130    8735 main.go:141] libmachine: Parsing certificate...
	I0415 17:23:35.158779    8735 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 17:23:35.210411    8735 cli_runner.go:211] docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 17:23:35.210500    8735 network_create.go:281] running [docker network inspect multinode-243000] to gather additional debugging logs...
	I0415 17:23:35.210520    8735 cli_runner.go:164] Run: docker network inspect multinode-243000
	W0415 17:23:35.258579    8735 cli_runner.go:211] docker network inspect multinode-243000 returned with exit code 1
	I0415 17:23:35.258604    8735 network_create.go:284] error running [docker network inspect multinode-243000]: docker network inspect multinode-243000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-243000 not found
	I0415 17:23:35.258615    8735 network_create.go:286] output of [docker network inspect multinode-243000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-243000 not found
	
	** /stderr **
	I0415 17:23:35.258739    8735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:23:35.308993    8735 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:23:35.310616    8735 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:23:35.312201    8735 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:23:35.312530    8735 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024033b0}
	I0415 17:23:35.312543    8735 network_create.go:124] attempt to create docker network multinode-243000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 17:23:35.312610    8735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000
	W0415 17:23:35.361371    8735 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000 returned with exit code 1
	W0415 17:23:35.361403    8735 network_create.go:149] failed to create docker network multinode-243000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 17:23:35.361420    8735 network_create.go:116] failed to create docker network multinode-243000 192.168.76.0/24, will retry: subnet is taken
	I0415 17:23:35.362794    8735 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:23:35.363233    8735 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000887dc0}
	I0415 17:23:35.363246    8735 network_create.go:124] attempt to create docker network multinode-243000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 17:23:35.363336    8735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000
	I0415 17:23:35.447745    8735 network_create.go:108] docker network multinode-243000 192.168.85.0/24 created
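
The "Pool overlaps with other one on this address space" failure at 17:23:35 means the daemon already had a network covering 192.168.76.0/24 even though minikube's own scan thought it was free, so the scan advanced and succeeded on 192.168.85.0/24. A sketch for diagnosing such overlaps by listing every network's subnets; the --format template reuses the same .IPAM.Config fields the log's own inspect commands query:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Enumerate all network IDs, then print each network's name and subnets.
    	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
    	if err != nil {
    		panic(err)
    	}
    	for _, id := range strings.Fields(string(ids)) {
    		out, err := exec.Command("docker", "network", "inspect", id,
    			"--format", "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
    		if err != nil {
    			continue // network may have vanished between ls and inspect
    		}
    		fmt.Print(string(out))
    	}
    }

Comparing that listing against the subnets minikube tried (192.168.67.0/24, 192.168.76.0/24, 192.168.85.0/24) would show which non-minikube network caused the collision.
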
	I0415 17:23:35.447781    8735 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-243000" container
	I0415 17:23:35.447899    8735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 17:23:35.496794    8735 cli_runner.go:164] Run: docker volume create multinode-243000 --label name.minikube.sigs.k8s.io=multinode-243000 --label created_by.minikube.sigs.k8s.io=true
	I0415 17:23:35.544341    8735 oci.go:103] Successfully created a docker volume multinode-243000
	I0415 17:23:35.544461    8735 cli_runner.go:164] Run: docker run --rm --name multinode-243000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-243000 --entrypoint /usr/bin/test -v multinode-243000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 17:23:35.782648    8735 oci.go:107] Successfully prepared a docker volume multinode-243000
	I0415 17:23:35.782682    8735 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:23:35.782695    8735 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 17:23:35.782800    8735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-243000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 17:29:35.137753    8735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:29:35.137873    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:35.188991    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:35.189104    8735 retry.go:31] will retry after 233.548604ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:35.425064    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:35.476414    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:35.476522    8735 retry.go:31] will retry after 347.709876ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:35.824993    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:35.875715    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:35.875812    8735 retry.go:31] will retry after 386.421026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:36.264600    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:36.315519    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:29:36.315625    8735 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:29:36.315641    8735 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:36.315699    8735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:29:36.315758    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:36.365996    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:36.366098    8735 retry.go:31] will retry after 284.08052ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:36.651407    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:36.705114    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:36.705210    8735 retry.go:31] will retry after 370.662537ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:37.077371    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:37.130276    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:37.130382    8735 retry.go:31] will retry after 364.807182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:37.497602    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:37.551898    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:29:37.551998    8735 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:29:37.552016    8735 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:37.552027    8735 start.go:128] duration metric: took 6m2.436724983s to createHost
	I0415 17:29:37.552090    8735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:29:37.552143    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:37.601304    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:37.601391    8735 retry.go:31] will retry after 335.184841ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:37.938921    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:37.990691    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:37.990793    8735 retry.go:31] will retry after 504.783307ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:38.498010    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:38.551365    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:38.551460    8735 retry.go:31] will retry after 528.322725ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:39.081586    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:39.149684    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:29:39.149784    8735 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:29:39.149798    8735 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:39.149856    8735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:29:39.149916    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:39.197850    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:39.197939    8735 retry.go:31] will retry after 142.943231ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:39.342160    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:39.394742    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:39.394835    8735 retry.go:31] will retry after 300.721091ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:39.696139    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:39.748266    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:39.748363    8735 retry.go:31] will retry after 532.881315ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:40.283619    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:40.337314    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:29:40.337406    8735 retry.go:31] will retry after 558.207759ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:40.896434    8735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:29:40.947806    8735 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:29:40.947907    8735 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:29:40.947919    8735 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:29:40.947929    8735 fix.go:56] duration metric: took 6m22.271824507s for fixHost
	I0415 17:29:40.947934    8735 start.go:83] releasing machines lock for "multinode-243000", held for 6m22.271869535s
	W0415 17:29:40.948023    8735 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-243000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-243000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 17:29:40.989437    8735 out.go:177] 
	W0415 17:29:41.010696    8735 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 17:29:41.010752    8735 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 17:29:41.010777    8735 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 17:29:41.032622    8735 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-243000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (111.22147ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:29:41.269955    9037 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (752.22s)
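
The repeated retry.go lines above are all the same probe: a docker container inspect with a Go template that reads the published host port for 22/tcp, retried with short, growing delays until minikube gives up. A minimal Go sketch of that probe, assuming only a docker CLI on PATH; the container name comes from the log, and the fixed doubling backoff is only an approximation of the jittered delays shown above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// sshPort runs the same Go-template inspect seen in the log and
	// retries a few times, roughly mirroring the retry.go backoff.
	func sshPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		var lastErr error
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 4; attempt++ {
			out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			lastErr = err // e.g. "No such container" while the host never came up
			time.Sleep(delay)
			delay *= 2
		}
		return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
	}

	func main() {
		port, err := sshPort("multinode-243000")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("ssh host port:", port)
	}

Run against the state captured in this report it fails the same way the test did: preload extraction starts at 17:23:35, the next log line is at 17:29:35 when the 360-second createHost budget expires, and the "multinode-243000" container is never created, so every inspect afterwards returns "No such container".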

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (97.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (107.721146ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-243000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- rollout status deployment/busybox: exit status 1 (100.655447ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.956852ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.516244ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.772097ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.20656ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.815984ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.372083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.099445ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0415 17:30:04.830113    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:30:14.894300    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.292119ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.585379ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.168637ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.786585ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.678285ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- exec  -- nslookup kubernetes.io: exit status 1 (100.564055ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- exec  -- nslookup kubernetes.default: exit status 1 (101.010941ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (99.833817ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (112.767488ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:18.548640    9132 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (97.28s)
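
Every kubectl command in this test fails with no server found for cluster "multinode-243000": the kubeconfig entry exists, but it points at a cluster whose container was never created. The repeated multinode_test.go:505 runs are a poll for pod IPs. A hedged sketch of that poll pattern, shelling out to kubectl by context the way the later label check does; the context name is from the log, and the 30-second deadline is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podIPs polls kubectl for pod IPs until something shows up or the
	// deadline passes, echoing the test's multinode_test.go:505 loop.
	func podIPs(context string, timeout time.Duration) ([]string, error) {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
			if err == nil {
				if ips := strings.Fields(string(out)); len(ips) > 0 {
					return ips, nil
				}
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("no pod IPs for context %q after %s (last error: %v)",
					context, timeout, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		ips, err := podIPs("multinode-243000", 30*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("pod IPs:", ips)
	}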

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-243000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.508516ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-243000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (112.318419ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:18.814081    9141 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-243000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-243000 -v 3 --alsologtostderr: exit status 80 (202.102128ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:18.875854    9145 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:18.876731    9145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:18.876739    9145 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:18.876743    9145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:18.876916    9145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:18.877245    9145 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:18.878431    9145 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:18.878803    9145 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:18.927468    9145 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:18.949798    9145 out.go:177] 
	W0415 17:31:18.971571    9145 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-243000 host status: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-243000 host status: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	I0415 17:31:18.993233    9145 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-243000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (110.905702ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:19.179277    9151 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)
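
node add exits with GUEST_STATUS as soon as its pre-flight probe of the control-plane container fails; the probe is the --format={{.State.Status}} inspect visible in the stderr above. A small sketch of the same check, again assuming only a docker CLI; unlike the port lookup it does not retry, and it captures the daemon's stderr so the "No such container" message is preserved.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns docker's State.Status for a container, or an
	// error such as "No such container" when it was never created.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", name).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("inspect %q: %v: %s", name, err, strings.TrimSpace(string(out)))
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("multinode-243000")
		if err != nil {
			fmt.Println("cannot add a node:", err) // what this AddNode run hit
			return
		}
		fmt.Println("control-plane container state:", state) // e.g. "running"
	}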

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-243000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-243000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.074025ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-243000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-243000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-243000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (110.27533ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:19.379763    9158 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)
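
Two distinct failures are stacked here: kubectl cannot find the context, so nothing is written to stdout, and the test then feeds that empty string to a JSON decoder, which is exactly what yields Go's "unexpected end of JSON input". A self-contained illustration; the label value in the second decode is hypothetical.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl printed only to stderr, so the test tried to decode "".
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input

		// With a real jsonpath result the same decode succeeds.
		good := `[{"kubernetes.io/hostname":"multinode-243000"}]`
		if err := json.Unmarshal([]byte(good), &labels); err == nil {
			fmt.Println(labels[0]["kubernetes.io/hostname"])
		}
	}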

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-243000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-004000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-243000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-243000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-243000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (112.137694ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:19.729844    9170 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
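
The assertion parses minikube profile list --output json, which, per the blob above, returns "invalid" and "valid" profile arrays, each valid profile carrying a Config with a Nodes slice; the check fails because Config.Nodes holds one entry instead of the expected three. A sketch of that node count, decoding only the fields the log actually shows and assuming a minikube binary on PATH (the report itself invokes out/minikube-darwin-amd64).

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Just the fields of `minikube profile list --output json` that the
	// node-count check needs, as seen in the log output above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test wanted 3 here
		}
	}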

                                                
                                    
TestMultiNode/serial/CopyFile (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status --output json --alsologtostderr: exit status 7 (112.932927ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-243000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:19.791630    9174 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:19.792281    9174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:19.792290    9174 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:19.792296    9174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:19.792903    9174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:19.793090    9174 out.go:298] Setting JSON to true
	I0415 17:31:19.793116    9174 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:19.793153    9174 notify.go:220] Checking for updates...
	I0415 17:31:19.793392    9174 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:19.793406    9174 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:19.793779    9174 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:19.842789    9174 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:19.842853    9174 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:19.842872    9174 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:19.842894    9174 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:19.842904    9174 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-243000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
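Note: the decode failure above is a shape mismatch, not malformed output. With only the control-plane node left in the profile, `minikube status --output json` prints a single JSON object (see the -- stdout -- block above), while the multi-node test decodes into a slice ([]cmd.Status). A minimal Go sketch reproducing the error, using a simplified stand-in struct whose field names are taken from the JSON in the log:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Simplified stand-in for cmd.Status; fields match the JSON above.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// A single object, as emitted for the lone control-plane node.
		out := []byte(`{"Name":"multinode-243000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)
		var statuses []Status
		if err := json.Unmarshal(out, &statuses); err != nil {
			// Prints: json: cannot unmarshal object into Go value of type []main.Status
			fmt.Println(err)
		}
	}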
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
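Note: the `docker inspect` output above is not a container. The Scope, Driver, IPAM, and empty Containers fields identify it as the leftover minikube bridge network named multinode-243000. Plain `docker inspect` matches any Docker object type, which is why the post-mortem finds something while `docker container inspect` keeps failing with "No such container". A short sketch (a hypothetical helper, not part of the test suite) that disambiguates with --type:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspect runs `docker inspect --type <objType> <name>` and reports the result.
	func inspect(objType, name string) {
		out, err := exec.Command("docker", "inspect", "--type", objType, name).CombinedOutput()
		fmt.Printf("--type %s: err=%v\n%s", objType, err, out)
	}

	func main() {
		inspect("container", "multinode-243000") // fails: No such container
		inspect("network", "multinode-243000")   // succeeds: the bridge network shown above
	}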
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (112.859271ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:20.007578    9180 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)

                                                
                                    
TestMultiNode/serial/StopNode (0.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 node stop m03: exit status 85 (156.450117ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-243000 node stop m03": exit status 85
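Note: exit status 85 (GUEST_NODE_RETRIEVE) is consistent with the earlier failures: the multi-node cluster presumably never came up, so the profile config has no record of a worker "m03" to stop. A hedged sketch of a lookup that would produce this error (simplified types, not the exact minikube code):

	package main

	import "fmt"

	type Node struct{ Name string }
	type ClusterConfig struct{ Nodes []Node }

	// retrieve mimics the node lookup: only nodes recorded in the profile
	// config can be found, and no worker was ever recorded here.
	func retrieve(cc ClusterConfig, name string) (*Node, error) {
		for i := range cc.Nodes {
			if cc.Nodes[i].Name == name {
				return &cc.Nodes[i], nil
			}
		}
		return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
	}

	func main() {
		cc := ClusterConfig{Nodes: []Node{{Name: "multinode-243000"}}} // no m03
		if _, err := retrieve(cc, "m03"); err != nil {
			fmt.Println(err) // retrieving node: Could not find node m03
		}
	}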
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status: exit status 7 (112.306102ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:20.277109    9186 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:20.277120    9186 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr: exit status 7 (111.324925ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:20.338924    9190 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:20.339089    9190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:20.339094    9190 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:20.339097    9190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:20.339275    9190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:20.339447    9190 out.go:298] Setting JSON to false
	I0415 17:31:20.339470    9190 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:20.339506    9190 notify.go:220] Checking for updates...
	I0415 17:31:20.339740    9190 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:20.339757    9190 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:20.340143    9190 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:20.388412    9190 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:20.388471    9190 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:20.388491    9190 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:20.388509    9190 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:20.388517    9190 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr": multinode-243000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr": multinode-243000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr": multinode-243000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (112.747096ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:31:20.552937    9196 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.54s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (54.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 node start m03 -v=7 --alsologtostderr: exit status 85 (154.353201ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:20.615360    9200 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:20.615738    9200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:20.615744    9200 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:20.615747    9200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:20.615940    9200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:20.616251    9200 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:20.616516    9200 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:20.637828    9200 out.go:177] 
	W0415 17:31:20.659833    9200 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0415 17:31:20.659857    9200 out.go:239] * 
	* 
	W0415 17:31:20.663950    9200 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 17:31:20.685427    9200 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0415 17:31:20.615360    9200 out.go:291] Setting OutFile to fd 1 ...
I0415 17:31:20.615738    9200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:31:20.615744    9200 out.go:304] Setting ErrFile to fd 2...
I0415 17:31:20.615747    9200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:31:20.615940    9200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
I0415 17:31:20.616251    9200 mustload.go:65] Loading cluster: multinode-243000
I0415 17:31:20.616516    9200 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:31:20.637828    9200 out.go:177] 
W0415 17:31:20.659833    9200 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0415 17:31:20.659857    9200 out.go:239] * 
* 
W0415 17:31:20.663950    9200 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 17:31:20.685427    9200 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-243000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (112.221819ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:20.770613    9202 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:20.770815    9202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:20.770820    9202 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:20.770824    9202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:20.771007    9202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:20.771170    9202 out.go:298] Setting JSON to false
	I0415 17:31:20.771194    9202 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:20.771226    9202 notify.go:220] Checking for updates...
	I0415 17:31:20.771461    9202 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:20.771478    9202 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:20.771848    9202 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:20.819860    9202 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:20.819911    9202 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:20.819933    9202 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:20.819949    9202 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:20.819957    9202 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (120.03111ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:21.818176    9206 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:21.818443    9206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:21.818448    9206 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:21.818452    9206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:21.818635    9206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:21.818803    9206 out.go:298] Setting JSON to false
	I0415 17:31:21.818825    9206 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:21.818860    9206 notify.go:220] Checking for updates...
	I0415 17:31:21.819109    9206 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:21.819129    9206 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:21.819508    9206 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:21.870225    9206 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:21.870296    9206 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:21.870314    9206 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:21.870336    9206 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:21.870343    9206 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (118.590399ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:22.753772    9210 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:22.753956    9210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:22.753962    9210 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:22.753966    9210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:22.754150    9210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:22.754335    9210 out.go:298] Setting JSON to false
	I0415 17:31:22.754359    9210 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:22.754391    9210 notify.go:220] Checking for updates...
	I0415 17:31:22.754636    9210 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:22.754653    9210 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:22.755089    9210 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:22.804671    9210 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:22.804730    9210 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:22.804748    9210 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:22.804770    9210 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:22.804777    9210 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (119.881915ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:24.896898    9214 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:24.897164    9214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:24.897169    9214 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:24.897173    9214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:24.897359    9214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:24.897533    9214 out.go:298] Setting JSON to false
	I0415 17:31:24.897556    9214 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:24.897599    9214 notify.go:220] Checking for updates...
	I0415 17:31:24.898896    9214 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:24.898915    9214 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:24.899292    9214 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:24.951730    9214 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:24.951777    9214 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:24.951796    9214 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:24.951815    9214 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:24.951827    9214 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (121.154542ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:29.999197    9218 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:29.999460    9218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:29.999465    9218 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:29.999488    9218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:29.999699    9218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:29.999940    9218 out.go:298] Setting JSON to false
	I0415 17:31:29.999982    9218 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:30.000023    9218 notify.go:220] Checking for updates...
	I0415 17:31:30.001259    9218 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:30.001279    9218 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:30.001707    9218 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:30.054560    9218 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:30.054628    9218 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:30.054650    9218 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:30.054672    9218 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:30.054679    9218 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (115.070665ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:35.956672    9222 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:35.956948    9222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:35.956954    9222 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:35.956958    9222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:35.957132    9222 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:35.957301    9222 out.go:298] Setting JSON to false
	I0415 17:31:35.957323    9222 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:35.957361    9222 notify.go:220] Checking for updates...
	I0415 17:31:35.957585    9222 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:35.957601    9222 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:35.958085    9222 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:36.005614    9222 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:36.005658    9222 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:36.005678    9222 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:36.005694    9222 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:36.005702    9222 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (117.271043ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:41.146024    9228 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:41.146238    9228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:41.146243    9228 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:41.146247    9228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:41.146472    9228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:41.146672    9228 out.go:298] Setting JSON to false
	I0415 17:31:41.146714    9228 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:41.146749    9228 notify.go:220] Checking for updates...
	I0415 17:31:41.146984    9228 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:41.147001    9228 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:41.148348    9228 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:41.197252    9228 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:41.197297    9228 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:41.197319    9228 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:41.197337    9228 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:41.197344    9228 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (122.637124ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:31:54.514291    9235 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:31:54.514564    9235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:54.514570    9235 out.go:304] Setting ErrFile to fd 2...
	I0415 17:31:54.514574    9235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:31:54.514775    9235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:31:54.514977    9235 out.go:298] Setting JSON to false
	I0415 17:31:54.515014    9235 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:31:54.515338    9235 notify.go:220] Checking for updates...
	I0415 17:31:54.515525    9235 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:31:54.515542    9235 status.go:255] checking status of multinode-243000 ...
	I0415 17:31:54.515982    9235 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:31:54.567061    9235 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:31:54.567106    9235 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:31:54.567127    9235 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:31:54.567143    9235 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:31:54.567153    9235 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr: exit status 7 (121.697067ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:32:15.218247    9240 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:32:15.218580    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:32:15.218585    9240 out.go:304] Setting ErrFile to fd 2...
	I0415 17:32:15.218589    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:32:15.218801    9240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:32:15.219005    9240 out.go:298] Setting JSON to false
	I0415 17:32:15.219053    9240 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:32:15.219089    9240 notify.go:220] Checking for updates...
	I0415 17:32:15.220474    9240 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:32:15.220496    9240 status.go:255] checking status of multinode-243000 ...
	I0415 17:32:15.220975    9240 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:15.271688    9240 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:15.271734    9240 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:32:15.271763    9240 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:32:15.271781    9240 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:32:15.271789    9240 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-243000 status -v=7 --alsologtostderr" : exit status 7
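Note: the timestamps above (17:31:20, :21, :22, :24, :29, :35, :41, :54, then 17:32:15) show the test re-polling status at roughly exponentially growing intervals before giving up. A minimal sketch of that polling pattern; the command path and intervals are illustrative, not the test helper's exact values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(55 * time.Second)
		for wait := time.Second; time.Now().Before(deadline); wait *= 2 {
			// The same invocation the test retries above.
			err := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-243000",
				"status", "-v=7", "--alsologtostderr").Run()
			if err == nil {
				fmt.Println("host is Running")
				return
			}
			fmt.Printf("status failed (%v); retrying in %v\n", err, wait)
			time.Sleep(wait)
		}
		fmt.Println("gave up: host never reached Running")
	}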
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "b280bc363a952b5ba073a3f274d14c0c5b9936b7b6f8bec08e29d8ec87b23d60",
	        "Created": "2024-04-16T00:23:35.407425158Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (112.309091ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:32:15.436658    9246 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.88s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (784.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-243000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-243000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-243000: exit status 82 (9.914979749s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-243000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-243000" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-243000 --wait=true -v=8 --alsologtostderr
E0415 17:34:47.942565    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:35:04.829501    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:35:14.896103    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:39:57.941028    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:40:04.830551    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:40:14.894784    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:45:04.830425    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:45:14.895737    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-243000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m54.618368352s)

                                                
                                                
-- stdout --
	* [multinode-243000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-243000" primary control-plane node in "multinode-243000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* docker "multinode-243000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-243000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:32:25.476838    9268 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:32:25.477501    9268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:32:25.477510    9268 out.go:304] Setting ErrFile to fd 2...
	I0415 17:32:25.477515    9268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:32:25.477952    9268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:32:25.479732    9268 out.go:298] Setting JSON to false
	I0415 17:32:25.502045    9268 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3716,"bootTime":1713223829,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 17:32:25.502137    9268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:32:25.524320    9268 out.go:177] * [multinode-243000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 17:32:25.546237    9268 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 17:32:25.546278    9268 notify.go:220] Checking for updates...
	I0415 17:32:25.591119    9268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 17:32:25.613188    9268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 17:32:25.634910    9268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 17:32:25.656259    9268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 17:32:25.678304    9268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 17:32:25.700751    9268 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:32:25.700919    9268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:32:25.756872    9268 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:32:25.757041    9268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:32:25.861222    9268 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-16 00:32:25.850668881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:32:25.883252    9268 out.go:177] * Using the docker driver based on existing profile
	I0415 17:32:25.905129    9268 start.go:297] selected driver: docker
	I0415 17:32:25.905189    9268 start.go:901] validating driver "docker" against &{Name:multinode-243000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-243000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:32:25.905352    9268 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 17:32:25.905571    9268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:32:26.012381    9268 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-16 00:32:26.001451372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:32:26.015410    9268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 17:32:26.015473    9268 cni.go:84] Creating CNI manager for ""
	I0415 17:32:26.015481    9268 cni.go:136] multinode detected (1 nodes found), recommending kindnet
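
The kindnet recommendation above reduces to a node-count rule. A minimal Go sketch of that rule follows; chooseCNI is a hypothetical helper for illustration, not minikube's internal API:

    package main

    import "fmt"

    // chooseCNI mirrors the decision logged above: a multinode cluster gets
    // kindnet; a single-node cluster keeps minikube's default selection.
    func chooseCNI(nodeCount int, multiNodeRequested bool) string {
        if nodeCount > 1 || multiNodeRequested {
            return "kindnet"
        }
        return "auto"
    }

    func main() {
        // This run: 1 node found so far, but MultiNodeRequested is true.
        fmt.Println(chooseCNI(1, true)) // "kindnet"
    }
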
	I0415 17:32:26.015547    9268 start.go:340] cluster config:
	{Name:multinode-243000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-243000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:32:26.037485    9268 out.go:177] * Starting "multinode-243000" primary control-plane node in "multinode-243000" cluster
	I0415 17:32:26.059378    9268 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 17:32:26.081163    9268 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 17:32:26.123317    9268 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:32:26.123353    9268 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 17:32:26.123391    9268 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:32:26.123408    9268 cache.go:56] Caching tarball of preloaded images
	I0415 17:32:26.123635    9268 preload.go:173] Found /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 17:32:26.123655    9268 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 17:32:26.124562    9268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/multinode-243000/config.json ...
	I0415 17:32:26.173269    9268 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0415 17:32:26.173290    9268 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0415 17:32:26.173324    9268 cache.go:194] Successfully downloaded all kic artifacts
	I0415 17:32:26.173369    9268 start.go:360] acquireMachinesLock for multinode-243000: {Name:mk4161ad8ce629d0c03264b515abcdde42d39cc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:32:26.173471    9268 start.go:364] duration metric: took 83.076µs to acquireMachinesLock for "multinode-243000"
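
The machines lock acquired above carries Delay:500ms and Timeout:10m0s. A sketch of what a poll-until-timeout acquire looks like in Go; tryAcquire is a hypothetical in-process stand-in, whereas the real lock is a named, cross-process lock:

    package main

    import (
        "errors"
        "sync"
        "time"
    )

    // tryAcquire polls the mutex every delay until timeout elapses,
    // matching the Delay:500ms / Timeout:10m0s spec in the log.
    func tryAcquire(mu *sync.Mutex, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if mu.TryLock() { // TryLock requires Go 1.18+
                return nil
            }
            time.Sleep(delay)
        }
        return errors.New("timed out acquiring machines lock")
    }

    func main() {
        var mu sync.Mutex
        if err := tryAcquire(&mu, 500*time.Millisecond, 10*time.Minute); err == nil {
            defer mu.Unlock()
        }
    }
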
	I0415 17:32:26.173494    9268 start.go:96] Skipping create...Using existing machine configuration
	I0415 17:32:26.173504    9268 fix.go:54] fixHost starting: 
	I0415 17:32:26.173750    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:26.222003    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:26.222056    9268 fix.go:112] recreateIfNeeded on multinode-243000: state= err=unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:26.222077    9268 fix.go:117] machineExists: false. err=machine does not exist
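
The machineExists: false verdict comes from mapping the inspect failure ("No such container") to "machine absent". A Go sketch of that probe, using the same docker CLI invocation shown in the log; containerState is a hypothetical helper with simplified error handling:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState inspects the container's .State.Status and treats a
    // "No such container" daemon error as the machine not existing.
    func containerState(name string) (state string, exists bool, err error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "No such container") {
                return "", false, nil // machine does not exist
            }
            return "", false, err
        }
        return strings.TrimSpace(string(out)), true, nil
    }

    func main() {
        state, ok, _ := containerState("multinode-243000")
        fmt.Println(state, ok)
    }
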
	I0415 17:32:26.244139    9268 out.go:177] * docker "multinode-243000" container is missing, will recreate.
	I0415 17:32:26.286748    9268 delete.go:124] DEMOLISHING multinode-243000 ...
	I0415 17:32:26.286949    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:26.336648    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:32:26.336693    9268 stop.go:83] unable to get state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:26.336710    9268 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:26.337084    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:26.385314    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:26.385364    9268 delete.go:82] Unable to get host status for multinode-243000, assuming it has already been deleted: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:26.385444    9268 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:32:26.433921    9268 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
	I0415 17:32:26.433951    9268 kic.go:371] could not find the container multinode-243000 to remove it. will try anyways
	I0415 17:32:26.434035    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:26.482160    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:32:26.482210    9268 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:26.482291    9268 cli_runner.go:164] Run: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0"
	W0415 17:32:26.529285    9268 cli_runner.go:211] docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 17:32:26.529313    9268 oci.go:650] error shutdown multinode-243000: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:27.530256    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:27.582150    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:27.582192    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:27.582199    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:27.582239    9268 retry.go:31] will retry after 281.62144ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
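
The waits logged by retry.go above (281ms, 957ms, 1.46s, 2.44s, ...) follow a jittered, roughly exponential backoff. A self-contained Go sketch of such a loop; retryExpo is illustrative, not minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries fn with jittered, roughly doubling waits until
    // maxElapsed has passed, in the spirit of the waits shown in the log.
    func retryExpo(fn func() error, maxElapsed time.Duration) error {
        start := time.Now()
        wait := 250 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > maxElapsed {
                return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
            }
            jittered := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            wait *= 2 // double the base wait each attempt
        }
    }

    func main() {
        _ = retryExpo(func() error {
            return errors.New("container not found")
        }, 5*time.Second)
    }
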
	I0415 17:32:27.864534    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:27.917023    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:27.917081    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:27.917092    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:27.917114    9268 retry.go:31] will retry after 957.267279ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:28.876759    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:28.927579    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:28.927621    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:28.927638    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:28.927665    9268 retry.go:31] will retry after 1.461063768s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:30.391079    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:30.442459    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:30.442500    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:30.442508    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:30.442533    9268 retry.go:31] will retry after 2.4404337s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:32.883898    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:32.937165    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:32.937210    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:32.937224    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:32.937244    9268 retry.go:31] will retry after 3.7718276s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:36.711505    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:36.763354    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:36.763407    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:36.763422    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:36.763447    9268 retry.go:31] will retry after 1.984957097s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:38.750798    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:38.804662    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:38.804733    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:38.804745    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:38.804766    9268 retry.go:31] will retry after 3.016804869s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:41.822293    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:32:41.875268    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:32:41.875316    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:32:41.875327    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:32:41.875353    9268 oci.go:88] couldn't shut down multinode-243000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	 
	I0415 17:32:41.875422    9268 cli_runner.go:164] Run: docker rm -f -v multinode-243000
	I0415 17:32:41.923031    9268 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:32:41.970940    9268 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
	I0415 17:32:41.971091    9268 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:32:42.019972    9268 cli_runner.go:164] Run: docker network rm multinode-243000
	I0415 17:32:42.128888    9268 fix.go:124] Sleeping 1 second for extra luck!
	I0415 17:32:43.129315    9268 start.go:125] createHost starting for "" (driver="docker")
	I0415 17:32:43.152530    9268 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 17:32:43.152725    9268 start.go:159] libmachine.API.Create for "multinode-243000" (driver="docker")
	I0415 17:32:43.152768    9268 client.go:168] LocalClient.Create starting
	I0415 17:32:43.152989    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 17:32:43.153092    9268 main.go:141] libmachine: Decoding PEM data...
	I0415 17:32:43.153125    9268 main.go:141] libmachine: Parsing certificate...
	I0415 17:32:43.153213    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 17:32:43.153293    9268 main.go:141] libmachine: Decoding PEM data...
	I0415 17:32:43.153308    9268 main.go:141] libmachine: Parsing certificate...
	I0415 17:32:43.175487    9268 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 17:32:43.228648    9268 cli_runner.go:211] docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 17:32:43.228738    9268 network_create.go:281] running [docker network inspect multinode-243000] to gather additional debugging logs...
	I0415 17:32:43.228755    9268 cli_runner.go:164] Run: docker network inspect multinode-243000
	W0415 17:32:43.277743    9268 cli_runner.go:211] docker network inspect multinode-243000 returned with exit code 1
	I0415 17:32:43.277774    9268 network_create.go:284] error running [docker network inspect multinode-243000]: docker network inspect multinode-243000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-243000 not found
	I0415 17:32:43.277792    9268 network_create.go:286] output of [docker network inspect multinode-243000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-243000 not found
	
	** /stderr **
	I0415 17:32:43.277902    9268 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:32:43.328430    9268 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:32:43.330060    9268 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:32:43.330419    9268 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002105360}
	I0415 17:32:43.330436    9268 network_create.go:124] attempt to create docker network multinode-243000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 17:32:43.330507    9268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000
	I0415 17:32:43.414427    9268 network_create.go:108] docker network multinode-243000 192.168.67.0/24 created
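
The subnet selection above skips 192.168.49.0/24 and 192.168.58.0/24 as reserved and lands on 192.168.67.0/24. A Go sketch of one way to get that behavior; note that minikube inspects host networks to detect reserved subnets first, whereas this simplified version just lets Docker reject overlapping ones (createFreeNetwork and the candidate list are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createFreeNetwork walks candidate private /24s (49, 58, 67, ... as in
    // the log) and creates the first bridge network Docker accepts.
    func createFreeNetwork(name string) (string, error) {
        for _, third := range []int{49, 58, 67, 76, 85} {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            cmd := exec.Command("docker", "network", "create",
                "--driver=bridge",
                "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=65535", name)
            if err := cmd.Run(); err == nil {
                return subnet, nil
            }
            // subnet reserved or overlapping: try the next candidate
        }
        return "", fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
        subnet, err := createFreeNetwork("multinode-243000")
        fmt.Println(subnet, err)
    }
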
	I0415 17:32:43.414462    9268 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-243000" container
	I0415 17:32:43.414583    9268 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 17:32:43.463705    9268 cli_runner.go:164] Run: docker volume create multinode-243000 --label name.minikube.sigs.k8s.io=multinode-243000 --label created_by.minikube.sigs.k8s.io=true
	I0415 17:32:43.512036    9268 oci.go:103] Successfully created a docker volume multinode-243000
	I0415 17:32:43.512156    9268 cli_runner.go:164] Run: docker run --rm --name multinode-243000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-243000 --entrypoint /usr/bin/test -v multinode-243000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 17:32:43.750607    9268 oci.go:107] Successfully prepared a docker volume multinode-243000
	I0415 17:32:43.750643    9268 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:32:43.750658    9268 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 17:32:43.750771    9268 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-243000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
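
The extraction step above can be reproduced standalone: mount the preload tarball read-only plus the named volume, then untar with lz4 into the volume. A Go sketch wrapping the same docker run invocation the log shows (the image ref is shortened here; the run above pins a sha256 digest):

    package main

    import "os/exec"

    // extractPreload mirrors the log's tar step: the tarball is mounted at
    // /preloaded.tar and unpacked into the volume mounted at /extractDir.
    func extractPreload(tarball, volume, image string) error {
        return exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
    }

    func main() {
        _ = extractPreload(
            "/Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4",
            "multinode-243000",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647")
    }
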
	I0415 17:38:43.154940    9268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:38:43.155078    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:43.209146    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:43.209263    9268 retry.go:31] will retry after 339.31894ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
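
The port probe that keeps failing here uses a Go template against the container's published ports to find the host side of 22/tcp. A sketch of the same lookup; sshHostPort is a hypothetical helper around the exact template from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort asks Docker which host port is published for the guest's
    // 22/tcp, using the template shown in the log above.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", tmpl, container).Output()
        if err != nil {
            return "", fmt.Errorf("get port 22 for %q: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("multinode-243000")
        fmt.Println(port, err)
    }
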
	I0415 17:38:43.548926    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:43.601015    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:43.601109    9268 retry.go:31] will retry after 320.596672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:43.924072    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:43.975114    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:43.975225    9268 retry.go:31] will retry after 805.673863ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:44.783258    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:44.836207    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:38:44.836317    9268 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:38:44.836335    9268 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:44.836397    9268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:38:44.836451    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:44.886522    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:44.886620    9268 retry.go:31] will retry after 248.008939ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:45.137099    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:45.188842    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:45.188945    9268 retry.go:31] will retry after 445.672406ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:45.635842    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:45.685793    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:45.685904    9268 retry.go:31] will retry after 767.0134ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:46.455116    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:46.505467    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:38:46.505579    9268 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:38:46.505596    9268 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:46.505609    9268 start.go:128] duration metric: took 6m3.376393195s to createHost
	I0415 17:38:46.505679    9268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:38:46.505733    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:46.554742    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:46.554832    9268 retry.go:31] will retry after 246.690308ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:46.803769    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:46.853518    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:46.853621    9268 retry.go:31] will retry after 293.875828ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:47.149897    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:47.201261    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:47.201344    9268 retry.go:31] will retry after 833.391272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:48.036436    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:48.088420    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:38:48.088517    9268 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:38:48.088532    9268 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:48.088591    9268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:38:48.088646    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:48.137021    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:48.137112    9268 retry.go:31] will retry after 214.85114ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:48.353613    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:48.402782    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:48.402878    9268 retry.go:31] will retry after 331.056534ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:48.736081    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:48.785597    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:38:48.785691    9268 retry.go:31] will retry after 552.58381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:49.338514    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:38:49.389731    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:38:49.389833    9268 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:38:49.389850    9268 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:49.389860    9268 fix.go:56] duration metric: took 6m23.216561323s for fixHost
	I0415 17:38:49.389875    9268 start.go:83] releasing machines lock for "multinode-243000", held for 6m23.216598988s
	W0415 17:38:49.389891    9268 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 17:38:49.389948    9268 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 17:38:49.389954    9268 start.go:728] Will try again in 5 seconds ...
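
The pattern above (a 360-second bound on host creation, then one more attempt after a 5-second pause) looks like the following in outline. A hedged Go sketch; startHost and its callback are stand-ins, not minikube's actual signatures:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // startHost bounds each creation attempt by timeout and, on failure,
    // waits retryDelay and tries exactly once more, as the log shows
    // (360s timeout, then "Will try again in 5 seconds").
    func startHost(createHost func(context.Context) error, timeout, retryDelay time.Duration) error {
        attempt := func() error {
            ctx, cancel := context.WithTimeout(context.Background(), timeout)
            defer cancel()
            return createHost(ctx)
        }
        if err := attempt(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(retryDelay)
            return attempt()
        }
        return nil
    }

    func main() {
        err := startHost(func(ctx context.Context) error {
            return errors.New("recreate: creating host: create host timed out")
        }, 360*time.Second, 5*time.Second)
        fmt.Println(err)
    }
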
	I0415 17:38:54.391422    9268 start.go:360] acquireMachinesLock for multinode-243000: {Name:mk4161ad8ce629d0c03264b515abcdde42d39cc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:38:54.391604    9268 start.go:364] duration metric: took 122.821µs to acquireMachinesLock for "multinode-243000"
	I0415 17:38:54.391631    9268 start.go:96] Skipping create...Using existing machine configuration
	I0415 17:38:54.391636    9268 fix.go:54] fixHost starting: 
	I0415 17:38:54.391957    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:54.444579    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:38:54.444621    9268 fix.go:112] recreateIfNeeded on multinode-243000: state= err=unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:54.444636    9268 fix.go:117] machineExists: false. err=machine does not exist
	I0415 17:38:54.466575    9268 out.go:177] * docker "multinode-243000" container is missing, will recreate.
	I0415 17:38:54.507971    9268 delete.go:124] DEMOLISHING multinode-243000 ...
	I0415 17:38:54.508111    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:54.556885    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:38:54.556930    9268 stop.go:83] unable to get state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:54.556949    9268 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:54.557320    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:54.604585    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:38:54.604632    9268 delete.go:82] Unable to get host status for multinode-243000, assuming it has already been deleted: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:54.604718    9268 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:38:54.652721    9268 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
	I0415 17:38:54.652750    9268 kic.go:371] could not find the container multinode-243000 to remove it. will try anyways
	I0415 17:38:54.652818    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:54.700673    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:38:54.700717    9268 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:54.700802    9268 cli_runner.go:164] Run: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0"
	W0415 17:38:54.749128    9268 cli_runner.go:211] docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 17:38:54.749158    9268 oci.go:650] error shutdown multinode-243000: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:55.751237    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:55.802187    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:38:55.802229    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:55.802239    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:38:55.802262    9268 retry.go:31] will retry after 647.285911ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:56.450800    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:56.499662    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:38:56.499715    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:56.499726    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:38:56.499746    9268 retry.go:31] will retry after 1.014056883s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:57.516162    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:57.566535    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:38:57.566588    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:57.566600    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:38:57.566623    9268 retry.go:31] will retry after 1.558696142s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:59.126922    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:38:59.177103    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:38:59.177164    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:38:59.177174    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:38:59.177197    9268 retry.go:31] will retry after 1.681648617s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:00.861202    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:39:00.913075    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:39:00.913119    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:00.913127    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:39:00.913156    9268 retry.go:31] will retry after 2.959568871s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:03.873965    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:39:03.925503    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:39:03.925545    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:03.925562    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:39:03.925585    9268 retry.go:31] will retry after 2.359853277s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:06.286698    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:39:06.340327    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:39:06.340369    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:06.340383    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:39:06.340407    9268 retry.go:31] will retry after 6.09129737s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:12.433406    9268 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:39:12.483109    9268 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:39:12.483156    9268 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:39:12.483164    9268 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:39:12.483195    9268 oci.go:88] couldn't shut down multinode-243000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	 
	I0415 17:39:12.483280    9268 cli_runner.go:164] Run: docker rm -f -v multinode-243000
	I0415 17:39:12.532692    9268 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:39:12.580943    9268 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
	I0415 17:39:12.581049    9268 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:39:12.628919    9268 cli_runner.go:164] Run: docker network rm multinode-243000
	I0415 17:39:12.730270    9268 fix.go:124] Sleeping 1 second for extra luck!
	I0415 17:39:13.730541    9268 start.go:125] createHost starting for "" (driver="docker")
	I0415 17:39:13.752737    9268 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 17:39:13.752896    9268 start.go:159] libmachine.API.Create for "multinode-243000" (driver="docker")
	I0415 17:39:13.752923    9268 client.go:168] LocalClient.Create starting
	I0415 17:39:13.753155    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 17:39:13.753251    9268 main.go:141] libmachine: Decoding PEM data...
	I0415 17:39:13.753275    9268 main.go:141] libmachine: Parsing certificate...
	I0415 17:39:13.753360    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 17:39:13.753434    9268 main.go:141] libmachine: Decoding PEM data...
	I0415 17:39:13.753450    9268 main.go:141] libmachine: Parsing certificate...
	I0415 17:39:13.754054    9268 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 17:39:13.802685    9268 cli_runner.go:211] docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 17:39:13.802789    9268 network_create.go:281] running [docker network inspect multinode-243000] to gather additional debugging logs...
	I0415 17:39:13.802817    9268 cli_runner.go:164] Run: docker network inspect multinode-243000
	W0415 17:39:13.850998    9268 cli_runner.go:211] docker network inspect multinode-243000 returned with exit code 1
	I0415 17:39:13.851032    9268 network_create.go:284] error running [docker network inspect multinode-243000]: docker network inspect multinode-243000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-243000 not found
	I0415 17:39:13.851044    9268 network_create.go:286] output of [docker network inspect multinode-243000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-243000 not found
	
	** /stderr **
	I0415 17:39:13.851186    9268 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:39:13.901391    9268 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:39:13.902959    9268 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:39:13.904265    9268 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:39:13.904584    9268 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024a26e0}
	I0415 17:39:13.904596    9268 network_create.go:124] attempt to create docker network multinode-243000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 17:39:13.904664    9268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000
	W0415 17:39:13.953784    9268 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000 returned with exit code 1
	W0415 17:39:13.953819    9268 network_create.go:149] failed to create docker network multinode-243000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 17:39:13.953840    9268 network_create.go:116] failed to create docker network multinode-243000 192.168.76.0/24, will retry: subnet is taken
	I0415 17:39:13.955262    9268 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:39:13.955637    9268 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024a3660}
	I0415 17:39:13.955649    9268 network_create.go:124] attempt to create docker network multinode-243000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 17:39:13.955713    9268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000
	I0415 17:39:14.073841    9268 network_create.go:108] docker network multinode-243000 192.168.85.0/24 created
	I0415 17:39:14.073872    9268 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-243000" container
	I0415 17:39:14.073982    9268 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 17:39:14.123006    9268 cli_runner.go:164] Run: docker volume create multinode-243000 --label name.minikube.sigs.k8s.io=multinode-243000 --label created_by.minikube.sigs.k8s.io=true
	I0415 17:39:14.171238    9268 oci.go:103] Successfully created a docker volume multinode-243000
	I0415 17:39:14.171365    9268 cli_runner.go:164] Run: docker run --rm --name multinode-243000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-243000 --entrypoint /usr/bin/test -v multinode-243000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 17:39:14.407081    9268 oci.go:107] Successfully prepared a docker volume multinode-243000
	I0415 17:39:14.407118    9268 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:39:14.407131    9268 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 17:39:14.407248    9268 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-243000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 17:45:13.755129    9268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:45:13.755269    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:13.808232    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:13.808343    9268 retry.go:31] will retry after 142.663712ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:13.952330    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:14.056816    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:14.056947    9268 retry.go:31] will retry after 284.705048ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:14.344067    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:14.395470    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:14.395579    9268 retry.go:31] will retry after 655.168422ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:15.051485    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:15.104801    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:45:15.104903    9268 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:45:15.104922    9268 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:15.104980    9268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:45:15.105038    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:15.156075    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:15.156169    9268 retry.go:31] will retry after 341.257825ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:15.499919    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:15.553690    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:15.553790    9268 retry.go:31] will retry after 252.76208ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:15.808737    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:15.859378    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:15.859489    9268 retry.go:31] will retry after 676.979001ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:16.536883    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:16.589299    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:45:16.589409    9268 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:45:16.589422    9268 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:16.589435    9268 start.go:128] duration metric: took 6m2.859066741s to createHost
	I0415 17:45:16.589509    9268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:45:16.589562    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:16.638123    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:16.638217    9268 retry.go:31] will retry after 136.378506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:16.775478    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:16.828362    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:16.828472    9268 retry.go:31] will retry after 473.318066ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:17.304167    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:17.355271    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:17.355374    9268 retry.go:31] will retry after 796.389726ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:18.154189    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:18.209320    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:45:18.209420    9268 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:45:18.209440    9268 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:18.209495    9268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:45:18.209546    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:18.259377    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:18.259466    9268 retry.go:31] will retry after 227.790302ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:18.489649    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:18.544163    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:18.544257    9268 retry.go:31] will retry after 288.236155ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:18.834269    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:18.885874    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:18.885966    9268 retry.go:31] will retry after 357.936023ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:19.246050    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:19.298239    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	I0415 17:45:19.298327    9268 retry.go:31] will retry after 535.691512ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:19.835049    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000
	W0415 17:45:19.886190    9268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000 returned with exit code 1
	W0415 17:45:19.886296    9268 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	W0415 17:45:19.886311    9268 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-243000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-243000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:19.886320    9268 fix.go:56] duration metric: took 6m25.494886943s for fixHost
	I0415 17:45:19.886326    9268 start.go:83] releasing machines lock for "multinode-243000", held for 6m25.494916961s
	W0415 17:45:19.886403    9268 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-243000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-243000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 17:45:19.928795    9268 out.go:177] 
	W0415 17:45:19.949961    9268 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 17:45:19.950021    9268 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 17:45:19.950046    9268 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 17:45:19.971970    9268 out.go:177] 

                                                
                                                
** /stderr **
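The "Pool overlaps with other one on this address space" error in the log above means Docker itself already had a network on 192.168.76.0/24, even though minikube's own reservation scan had marked that subnet free; the retry on 192.168.85.0/24 then succeeded. A minimal triage sketch, assuming only a local docker CLI on PATH (nothing below comes from the harness), that lists which Docker network owns which pool:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Print the IPAM subnet(s) of every Docker network; the pool that
	// collides with minikube's candidate CIDR will show up in this list.
	func main() {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			panic(err)
		}
		for _, name := range strings.Fields(string(out)) {
			// Same format string the log uses to read IPAM config.
			subnet, _ := exec.Command("docker", "network", "inspect", name,
				"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
			fmt.Printf("%-24s %s\n", name, strings.TrimSpace(string(subnet)))
		}
	}
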
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-243000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-243000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "12559b3694989c5096ef21388ef4387170fe292b307ef8279d70fc31acf4dede",
	        "Created": "2024-04-16T00:39:14.034459704Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (112.459074ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:45:20.276929    9595 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (784.84s)
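Most of the log above is one pattern repeated: a "docker container inspect --format {{.State.Status}}" probe failing with "No such container", followed by a backoff retry (retry.go:31), first while verifying shutdown and later while resolving the 22/tcp host port. A standalone sketch of that probe, assuming the docker CLI on PATH; the retry delays are illustrative, not minikube's actual schedule:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// probeState mirrors the cli_runner state check seen in the log: a
	// missing container surfaces as a non-zero exit with "No such
	// container" on stderr.
	func probeState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const name = "multinode-243000" // profile under test
		for i, d := range []time.Duration{0, 650 * time.Millisecond, time.Second, 2 * time.Second} {
			if d > 0 {
				time.Sleep(d)
			}
			state, err := probeState(name)
			fmt.Printf("attempt %d: state=%q err=%v\n", i+1, state, err)
			if err == nil {
				return
			}
		}
	}

Note also that the post-mortem "docker inspect" above returns the multinode-243000 bridge network (with an empty Containers map), not a container: the network outlived the container that was never recreated.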

                                                
                                    
TestMultiNode/serial/DeleteNode (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 node delete m03: exit status 80 (199.122185ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-243000 host status: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-243000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr: exit status 7 (112.246179ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:45:20.538814    9603 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:45:20.538989    9603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:45:20.538995    9603 out.go:304] Setting ErrFile to fd 2...
	I0415 17:45:20.538998    9603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:45:20.539182    9603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:45:20.539358    9603 out.go:298] Setting JSON to false
	I0415 17:45:20.539381    9603 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:45:20.539418    9603 notify.go:220] Checking for updates...
	I0415 17:45:20.539665    9603 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:45:20.539681    9603 status.go:255] checking status of multinode-243000 ...
	I0415 17:45:20.540119    9603 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:20.588515    9603 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:20.588572    9603 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:45:20.588590    9603 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:45:20.588607    9603 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:45:20.588614    9603 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "12559b3694989c5096ef21388ef4387170fe292b307ef8279d70fc31acf4dede",
	        "Created": "2024-04-16T00:39:14.034459704Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (111.043999ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:45:20.751605    9609 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.47s)
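For scripting around this failure mode it helps to read the exit codes directly: in this run "node delete" exits 80 (GUEST_STATUS) and "status" exits 7 once the backing container is gone. A small sketch, assuming the same out/ binary layout the harness uses:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode runs a command and returns its exit status without
	// treating a non-zero exit as a hard error.
	func exitCode(name string, args ...string) (int, error) {
		err := exec.Command(name, args...).Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode(), nil
		}
		return 0, err
	}

	func main() {
		// Binary path and profile name follow the test invocation above.
		code, err := exitCode("out/minikube-darwin-amd64", "-p", "multinode-243000", "status")
		fmt.Println(code, err)
	}
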

                                                
                                    
TestMultiNode/serial/StopMultiNode (15.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 stop: exit status 82 (15.362801273s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	* Stopping node "multinode-243000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-243000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-243000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status: exit status 7 (113.69808ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:45:36.228426    9630 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:45:36.228440    9630 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr: exit status 7 (111.31303ms)

                                                
                                                
-- stdout --
	multinode-243000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:45:36.289963    9634 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:45:36.290137    9634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:45:36.290142    9634 out.go:304] Setting ErrFile to fd 2...
	I0415 17:45:36.290145    9634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:45:36.290329    9634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:45:36.290513    9634 out.go:298] Setting JSON to false
	I0415 17:45:36.290535    9634 mustload.go:65] Loading cluster: multinode-243000
	I0415 17:45:36.290571    9634 notify.go:220] Checking for updates...
	I0415 17:45:36.290836    9634 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:45:36.290853    9634 status.go:255] checking status of multinode-243000 ...
	I0415 17:45:36.291224    9634 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:36.339795    9634 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:36.339851    9634 status.go:330] multinode-243000 host status = "" (err=state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	)
	I0415 17:45:36.339875    9634 status.go:257] multinode-243000 status: &{Name:multinode-243000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 17:45:36.339899    9634 status.go:260] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	E0415 17:45:36.339906    9634 status.go:263] The "multinode-243000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr": multinode-243000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-243000 status --alsologtostderr": multinode-243000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "12559b3694989c5096ef21388ef4387170fe292b307ef8279d70fc31acf4dede",
	        "Created": "2024-04-16T00:39:14.034459704Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (111.553366ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:45:36.503830    9640 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.75s)
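The six "Stopping node" attempts above end in GUEST_STOP_TIMEOUT because each attempt probes a container that no longer exists. The general shape of a bounded stop, sketched here with a plain "docker stop" rather than minikube's internal stop path (the profile name is reused from the failing test):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// stopWithDeadline gives a stop operation a fixed window instead of
	// retrying indefinitely against a container that is already gone.
	func stopWithDeadline(name string, d time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), d)
		defer cancel()
		return exec.CommandContext(ctx, "docker", "stop", name).Run()
	}

	func main() {
		fmt.Println(stopWithDeadline("multinode-243000", 15*time.Second))
	}
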

                                                
                                    
TestMultiNode/serial/RestartMultiNode (92.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-243000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-243000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m32.552382398s)

                                                
                                                
-- stdout --
	* [multinode-243000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-243000" primary control-plane node in "multinode-243000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* docker "multinode-243000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 17:45:36.564357    9644 out.go:291] Setting OutFile to fd 1 ...
	I0415 17:45:36.564610    9644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:45:36.564615    9644 out.go:304] Setting ErrFile to fd 2...
	I0415 17:45:36.564619    9644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:45:36.564796    9644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 17:45:36.566266    9644 out.go:298] Setting JSON to false
	I0415 17:45:36.589477    9644 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4507,"bootTime":1713223829,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 17:45:36.589564    9644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:45:36.612270    9644 out.go:177] * [multinode-243000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 17:45:36.654608    9644 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 17:45:36.654657    9644 notify.go:220] Checking for updates...
	I0415 17:45:36.698850    9644 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 17:45:36.720483    9644 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 17:45:36.741733    9644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 17:45:36.763877    9644 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 17:45:36.785831    9644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 17:45:36.808584    9644 config.go:182] Loaded profile config "multinode-243000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:45:36.809338    9644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:45:36.864621    9644 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:45:36.864796    9644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:45:36.973660    9644 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-16 00:45:36.962967691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:45:37.021818    9644 out.go:177] * Using the docker driver based on existing profile
	I0415 17:45:37.047765    9644 start.go:297] selected driver: docker
	I0415 17:45:37.047795    9644 start.go:901] validating driver "docker" against &{Name:multinode-243000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-243000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:45:37.047915    9644 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 17:45:37.048104    9644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:45:37.156773    9644 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-16 00:45:37.14588822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:45:37.159844    9644 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 17:45:37.159908    9644 cni.go:84] Creating CNI manager for ""
	I0415 17:45:37.159917    9644 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 17:45:37.159989    9644 start.go:340] cluster config:
	{Name:multinode-243000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-243000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:45:37.180777    9644 out.go:177] * Starting "multinode-243000" primary control-plane node in "multinode-243000" cluster
	I0415 17:45:37.202735    9644 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 17:45:37.223850    9644 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 17:45:37.265620    9644 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:45:37.265691    9644 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 17:45:37.265715    9644 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:45:37.265732    9644 cache.go:56] Caching tarball of preloaded images
	I0415 17:45:37.265977    9644 preload.go:173] Found /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 17:45:37.265997    9644 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 17:45:37.266613    9644 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/multinode-243000/config.json ...
	I0415 17:45:37.317852    9644 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0415 17:45:37.317874    9644 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0415 17:45:37.317893    9644 cache.go:194] Successfully downloaded all kic artifacts
	I0415 17:45:37.318038    9644 start.go:360] acquireMachinesLock for multinode-243000: {Name:mk4161ad8ce629d0c03264b515abcdde42d39cc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:45:37.318144    9644 start.go:364] duration metric: took 82.265µs to acquireMachinesLock for "multinode-243000"
	I0415 17:45:37.318166    9644 start.go:96] Skipping create...Using existing machine configuration
	I0415 17:45:37.318179    9644 fix.go:54] fixHost starting: 
	I0415 17:45:37.318464    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:37.367262    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:37.367328    9644 fix.go:112] recreateIfNeeded on multinode-243000: state= err=unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:37.367347    9644 fix.go:117] machineExists: false. err=machine does not exist
	I0415 17:45:37.389159    9644 out.go:177] * docker "multinode-243000" container is missing, will recreate.
	I0415 17:45:37.430642    9644 delete.go:124] DEMOLISHING multinode-243000 ...
	I0415 17:45:37.430828    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:37.480359    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:45:37.480407    9644 stop.go:83] unable to get state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:37.480425    9644 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:37.480790    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:37.529234    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:37.529308    9644 delete.go:82] Unable to get host status for multinode-243000, assuming it has already been deleted: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:37.529393    9644 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:45:37.576624    9644 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
	I0415 17:45:37.576678    9644 kic.go:371] could not find the container multinode-243000 to remove it. will try anyways
	I0415 17:45:37.576745    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:37.625415    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	W0415 17:45:37.625465    9644 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:37.625541    9644 cli_runner.go:164] Run: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0"
	W0415 17:45:37.674077    9644 cli_runner.go:211] docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 17:45:37.674104    9644 oci.go:650] error shutdown multinode-243000: docker exec --privileged -t multinode-243000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:38.674556    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:38.728547    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:38.728595    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:38.728603    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:38.728654    9644 retry.go:31] will retry after 493.296679ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:39.222383    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:39.273327    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:39.273372    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:39.273391    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:39.273417    9644 retry.go:31] will retry after 740.619758ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:40.016369    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:40.068131    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:40.068179    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:40.068191    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:40.068219    9644 retry.go:31] will retry after 827.569716ms: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:40.896390    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:40.948016    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:40.948059    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:40.948078    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:40.948101    9644 retry.go:31] will retry after 1.030414232s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:41.980837    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:42.031615    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:42.031666    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:42.031674    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:42.031697    9644 retry.go:31] will retry after 2.768519277s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:44.802604    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:44.854997    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:44.855040    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:44.855049    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:44.855070    9644 retry.go:31] will retry after 5.496573395s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:50.353990    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:50.408881    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:50.408922    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:50.408931    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:50.408954    9644 retry.go:31] will retry after 3.008109929s: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:53.419390    9644 cli_runner.go:164] Run: docker container inspect multinode-243000 --format={{.State.Status}}
	W0415 17:45:53.471224    9644 cli_runner.go:211] docker container inspect multinode-243000 --format={{.State.Status}} returned with exit code 1
	I0415 17:45:53.471267    9644 oci.go:662] temporary error verifying shutdown: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	I0415 17:45:53.471276    9644 oci.go:664] temporary error: container multinode-243000 status is  but expect it to be exited
	I0415 17:45:53.471310    9644 oci.go:88] couldn't shut down multinode-243000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000
	 
	I0415 17:45:53.471400    9644 cli_runner.go:164] Run: docker rm -f -v multinode-243000
	I0415 17:45:53.520184    9644 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-243000
	W0415 17:45:53.568499    9644 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-243000 returned with exit code 1
	I0415 17:45:53.568607    9644 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:45:53.617465    9644 cli_runner.go:164] Run: docker network rm multinode-243000
	I0415 17:45:53.723718    9644 fix.go:124] Sleeping 1 second for extra luck!
	I0415 17:45:54.725886    9644 start.go:125] createHost starting for "" (driver="docker")
	I0415 17:45:54.747906    9644 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 17:45:54.748095    9644 start.go:159] libmachine.API.Create for "multinode-243000" (driver="docker")
	I0415 17:45:54.748138    9644 client.go:168] LocalClient.Create starting
	I0415 17:45:54.748370    9644 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/ca.pem
	I0415 17:45:54.748467    9644 main.go:141] libmachine: Decoding PEM data...
	I0415 17:45:54.748522    9644 main.go:141] libmachine: Parsing certificate...
	I0415 17:45:54.748621    9644 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18647-976/.minikube/certs/cert.pem
	I0415 17:45:54.748695    9644 main.go:141] libmachine: Decoding PEM data...
	I0415 17:45:54.748710    9644 main.go:141] libmachine: Parsing certificate...
	I0415 17:45:54.769189    9644 cli_runner.go:164] Run: docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 17:45:54.820673    9644 cli_runner.go:211] docker network inspect multinode-243000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 17:45:54.820767    9644 network_create.go:281] running [docker network inspect multinode-243000] to gather additional debugging logs...
	I0415 17:45:54.820782    9644 cli_runner.go:164] Run: docker network inspect multinode-243000
	W0415 17:45:54.869964    9644 cli_runner.go:211] docker network inspect multinode-243000 returned with exit code 1
	I0415 17:45:54.870004    9644 network_create.go:284] error running [docker network inspect multinode-243000]: docker network inspect multinode-243000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-243000 not found
	I0415 17:45:54.870015    9644 network_create.go:286] output of [docker network inspect multinode-243000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-243000 not found
	
	** /stderr **
	I0415 17:45:54.870123    9644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 17:45:54.920173    9644 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:45:54.921687    9644 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 17:45:54.922194    9644 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002489030}
	I0415 17:45:54.922210    9644 network_create.go:124] attempt to create docker network multinode-243000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 17:45:54.922288    9644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-243000 multinode-243000
	I0415 17:45:55.005699    9644 network_create.go:108] docker network multinode-243000 192.168.67.0/24 created
	I0415 17:45:55.005738    9644 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-243000" container
	I0415 17:45:55.005834    9644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 17:45:55.055972    9644 cli_runner.go:164] Run: docker volume create multinode-243000 --label name.minikube.sigs.k8s.io=multinode-243000 --label created_by.minikube.sigs.k8s.io=true
	I0415 17:45:55.104152    9644 oci.go:103] Successfully created a docker volume multinode-243000
	I0415 17:45:55.104263    9644 cli_runner.go:164] Run: docker run --rm --name multinode-243000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-243000 --entrypoint /usr/bin/test -v multinode-243000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 17:45:55.348431    9644 oci.go:107] Successfully prepared a docker volume multinode-243000
	I0415 17:45:55.348466    9644 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:45:55.348480    9644 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 17:45:55.348597    9644 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-243000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-243000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-243000
helpers_test.go:235: (dbg) docker inspect multinode-243000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-243000",
	        "Id": "094ba7aa20d8730d814581da1af4a8e435f0a9808e14c479feb60482fb656e0d",
	        "Created": "2024-04-16T00:45:54.965848263Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-243000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-243000 -n multinode-243000: exit status 7 (113.237864ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:47:09.228826    9741 status.go:249] status error: host: state: unknown state "multinode-243000": docker container inspect multinode-243000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-243000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-243000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (92.73s)
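
Note: the recreate path above polls "docker container inspect" repeatedly, sleeping a growing, jittered delay between attempts (493ms, 740ms, 827ms, ... 5.49s) before concluding the container cannot be verified as exited. A minimal sketch of that retry-with-backoff pattern; the retry helper below is illustrative, not minikube's retry.go:

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retry calls fn until it succeeds or maxElapsed has passed, sleeping a
	// randomized, roughly doubling delay between attempts, similar to the
	// "will retry after ..." lines above. Hypothetical helper for illustration.
	func retry(fn func() error, maxElapsed time.Duration) error {
		start := time.Now()
		delay := 500 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return err
			}
			jitter := time.Duration(rand.Int63n(int64(delay)))
			time.Sleep(delay/2 + jitter)
			delay *= 2
		}
	}
	
	func main() {
		err := retry(func() error {
			return errors.New("container status is \"\" but expected \"exited\"")
		}, 10*time.Second)
		fmt.Println(err)
	}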

                                                
                                    
TestScheduledStopUnix (300.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-330000 --memory=2048 --driver=docker 
E0415 17:50:04.830364    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:50:14.893653    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:51:27.944338    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-330000 --memory=2048 --driver=docker : signal: killed (5m0.004110949s)

                                                
                                                
-- stdout --
	* [scheduled-stop-330000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-330000" primary control-plane node in "scheduled-stop-330000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-330000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-330000" primary control-plane node in "scheduled-stop-330000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-15 17:53:48.453495 -0700 PDT m=+4625.076113305
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-330000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-330000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-330000",
	        "Id": "075d146f82b4da5dd4c306170ecc6b02f0001beedce9058c25a17afef9323598",
	        "Created": "2024-04-16T00:48:49.512144137Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-330000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-330000 -n scheduled-stop-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-330000 -n scheduled-stop-330000: exit status 7 (116.367479ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:53:48.624903   10210 status.go:249] status error: host: state: unknown state "scheduled-stop-330000": docker container inspect scheduled-stop-330000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-330000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-330000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-330000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-330000
--- FAIL: TestScheduledStopUnix (300.89s)
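
Note: the "signal: killed (5m0.004110949s)" result above is how os/exec reports a child process killed when the test's deadline expires; minikube was still creating the container when it was killed. A small sketch reproducing that error shape; the 2-second timeout and "sleep 60" stand in for the suite's real timeout and real command:

	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		// When the context deadline passes, CommandContext kills the child
		// with SIGKILL, and Run reports the death as "signal: killed".
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
	
		err := exec.CommandContext(ctx, "sleep", "60").Run()
		fmt.Println(err) // prints "signal: killed"
	}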

                                                
                                    
TestSkaffold (300.9s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1660039732 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1660039732 version: (1.410133733s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-844000 --memory=2600 --driver=docker 
E0415 17:55:04.828363    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 17:55:14.896349    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 17:56:37.941172    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-844000 --memory=2600 --driver=docker : signal: killed (4m57.654553417s)

                                                
                                                
-- stdout --
	* [skaffold-844000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-844000" primary control-plane node in "skaffold-844000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-844000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-844000" primary control-plane node in "skaffold-844000" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-04-15 17:58:49.484945 -0700 PDT m=+4925.975071281
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-844000
helpers_test.go:235: (dbg) docker inspect skaffold-844000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-844000",
	        "Id": "b7c66fcc0b1ddeeda9cbb5d5ae2bb54f59173b25e621e7ea9a62ef6651315ba8",
	        "Created": "2024-04-16T00:53:52.77844526Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-844000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-844000 -n skaffold-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-844000 -n skaffold-844000: exit status 7 (114.130228ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 17:58:49.651799   10328 status.go:249] status error: host: state: unknown state "skaffold-844000": docker container inspect skaffold-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-844000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-844000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-844000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-844000
--- FAIL: TestSkaffold (300.90s)
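
Note: each post-mortem above runs "docker inspect" on the leftover minikube network and prints the JSON array shown. A sketch of decoding just the fields those dumps rely on (Name, IPAM.Config, Labels); the struct and its name are illustrative, not a Docker API type:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// network mirrors only the fields visible in the post-mortem output.
	type network struct {
		Name string
		IPAM struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
		Labels map[string]string
	}
	
	func main() {
		out, err := exec.Command("docker", "network", "inspect", "bridge").Output()
		if err != nil {
			panic(err)
		}
		var nets []network // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &nets); err != nil {
			panic(err)
		}
		for _, n := range nets {
			fmt.Println(n.Name, n.IPAM.Config, n.Labels)
		}
	}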

                                                
                                    
TestInsufficientStorage (300.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-740000 --memory=2048 --output=json --wait=true --driver=docker 
E0415 18:00:04.962563    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 18:00:15.027228    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-740000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.005415492s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9525024e-36ca-44c6-819c-901954110645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-740000] minikube v1.33.0-beta.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b285df4-7504-41a0-8a47-32b372465605","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18647"}}
	{"specversion":"1.0","id":"40010973-b929-43f3-a284-5d0ba5e63bba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig"}}
	{"specversion":"1.0","id":"b4c71dd8-0665-4d12-84d1-dc5e9a54bb66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5b83c7c8-2fc8-4dac-b286-f9af36d8cf32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"834eed91-91df-430d-a26a-5d8f762bbe41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube"}}
	{"specversion":"1.0","id":"e267010f-df4c-41fe-8824-6033c210217b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0c798342-9db2-44ee-90aa-1e89c2eadb12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e5cb4ed4-1f9a-4fb3-8426-db334cc06b8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8f58d9cd-0dca-4e51-8bd4-d8ba284cb15a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"35cffca4-d1e2-466f-adc2-7dc9e5c83a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"46106d13-130b-4740-8f93-d91d5ff55c3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-740000\" primary control-plane node in \"insufficient-storage-740000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a661eb4-2f31-4167-9682-ac4fa6b06e27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713215244-18647 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c67cd12-b106-4947-a484-0df143166b22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
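With --output=json, each stdout line above is a self-contained CloudEvents-style JSON object with specversion/id/type/data fields. As a rough illustration, a minimal sketch (hypothetical; the event struct below is a reduction of the fields visible in the log, not code from the test suite) that decodes such a stream line by line:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the JSON keys visible in the captured output above;
    // the data payload here is a flat string map ("message", "currentstep", ...).
    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // e.g. pipe the captured stdout above into this program
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip anything that is not a JSON event line
            }
            fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
        }
    }

Fed the lines above, this would print one "type: message" pair per step or info event.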
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-740000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-740000 --output=json --layout=cluster: context deadline exceeded (726ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
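The "unexpected end of JSON input" at status_test.go:87 is the error Go's encoding/json returns when asked to unmarshal empty input, consistent with the status command being cut off by the context deadline (726ns) before it wrote any output. A minimal reproduction (a sketch, not code from the test suite):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Unmarshalling the empty output a killed command leaves behind
        // fails before any JSON syntax is seen.
        var v map[string]any
        err := json.Unmarshal([]byte(""), &v)
        fmt.Println(err) // unexpected end of JSON input
    }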
helpers_test.go:175: Cleaning up "insufficient-storage-740000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-740000
--- FAIL: TestInsufficientStorage (300.73s)


Test pass (175/216)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.66
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.39
12 TestDownloadOnly/v1.29.3/json-events 10.5
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.32
18 TestDownloadOnly/v1.29.3/DeleteAll 0.66
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.38
21 TestDownloadOnly/v1.30.0-rc.2/json-events 12.77
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.3
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.65
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.38
29 TestDownloadOnlyKic 1.93
30 TestBinaryMirror 1.69
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.23
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
36 TestAddons/Setup 143.52
40 TestAddons/parallel/InspektorGadget 11.96
41 TestAddons/parallel/MetricsServer 5.88
42 TestAddons/parallel/HelmTiller 10.1
44 TestAddons/parallel/CSI 58.39
45 TestAddons/parallel/Headlamp 13.23
46 TestAddons/parallel/CloudSpanner 6.67
47 TestAddons/parallel/LocalPath 54.13
48 TestAddons/parallel/NvidiaDevicePlugin 5.81
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 11.78
61 TestHyperKitDriverInstallOrUpdate 7.25
64 TestErrorSpam/setup 20.82
65 TestErrorSpam/start 2.08
66 TestErrorSpam/status 1.2
67 TestErrorSpam/pause 1.66
68 TestErrorSpam/unpause 1.79
69 TestErrorSpam/stop 2.9
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 38.63
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.11
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.72
81 TestFunctional/serial/CacheCmd/cache/add_local 1.68
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.1
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2
86 TestFunctional/serial/CacheCmd/cache/delete 0.18
87 TestFunctional/serial/MinikubeKubectlCmd 1.03
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.38
89 TestFunctional/serial/ExtraConfig 42.28
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.21
92 TestFunctional/serial/LogsFileCmd 3.06
93 TestFunctional/serial/InvalidService 4.39
95 TestFunctional/parallel/ConfigCmd 0.61
96 TestFunctional/parallel/DashboardCmd 9.43
97 TestFunctional/parallel/DryRun 1.69
98 TestFunctional/parallel/InternationalLanguage 0.62
99 TestFunctional/parallel/StatusCmd 1.19
104 TestFunctional/parallel/AddonsCmd 0.28
105 TestFunctional/parallel/PersistentVolumeClaim 27.18
107 TestFunctional/parallel/SSHCmd 0.83
108 TestFunctional/parallel/CpCmd 2.44
109 TestFunctional/parallel/MySQL 32.07
110 TestFunctional/parallel/FileSync 0.41
111 TestFunctional/parallel/CertSync 2.56
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
119 TestFunctional/parallel/License 0.43
120 TestFunctional/parallel/Version/short 0.15
121 TestFunctional/parallel/Version/components 0.71
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.9
127 TestFunctional/parallel/ImageCommands/Setup 1.98
128 TestFunctional/parallel/DockerEnv/bash 1.58
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.32
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.34
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.98
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.61
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.95
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.65
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.32
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.25
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.04
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
150 TestFunctional/parallel/ServiceCmd/DeployApp 8.17
151 TestFunctional/parallel/ServiceCmd/List 1.02
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.02
153 TestFunctional/parallel/ServiceCmd/HTTPS 15
154 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
155 TestFunctional/parallel/ProfileCmd/profile_list 0.53
156 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
157 TestFunctional/parallel/MountCmd/any-port 7.63
158 TestFunctional/parallel/ServiceCmd/Format 15
159 TestFunctional/parallel/MountCmd/specific-port 2.39
160 TestFunctional/parallel/MountCmd/VerifyCleanup 2.85
161 TestFunctional/parallel/ServiceCmd/URL 15
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestMultiControlPlane/serial/StartCluster 104.11
169 TestMultiControlPlane/serial/DeployApp 5.26
170 TestMultiControlPlane/serial/PingHostFromPods 1.44
171 TestMultiControlPlane/serial/AddWorkerNode 19.52
172 TestMultiControlPlane/serial/NodeLabels 0.06
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
174 TestMultiControlPlane/serial/CopyFile 24.46
175 TestMultiControlPlane/serial/StopSecondaryNode 11.93
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
177 TestMultiControlPlane/serial/RestartSecondaryNode 34.63
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 215.44
180 TestMultiControlPlane/serial/DeleteSecondaryNode 11.98
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
182 TestMultiControlPlane/serial/StopCluster 33.04
183 TestMultiControlPlane/serial/RestartCluster 86.6
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
185 TestMultiControlPlane/serial/AddSecondaryNode 36.81
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
189 TestImageBuild/serial/Setup 21.29
190 TestImageBuild/serial/NormalBuild 1.75
191 TestImageBuild/serial/BuildWithBuildArg 0.97
192 TestImageBuild/serial/BuildWithDockerIgnore 0.81
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.81
197 TestJSONOutput/start/Command 34.51
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.6
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.62
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 10.83
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.76
222 TestKicCustomNetwork/create_custom_network 24.57
223 TestKicCustomNetwork/use_default_bridge_network 22.87
224 TestKicExistingNetwork 22.42
225 TestKicCustomSubnet 23.66
226 TestKicStaticIP 23.81
227 TestMainNoArgs 0.09
228 TestMinikubeProfile 48.79
231 TestMountStart/serial/StartWithMountFirst 7.32
232 TestMountStart/serial/VerifyMountFirst 0.38
233 TestMountStart/serial/StartWithMountSecond 7.31
234 TestMountStart/serial/VerifyMountSecond 0.37
235 TestMountStart/serial/DeleteFirst 2.05
236 TestMountStart/serial/VerifyMountPostDelete 0.38
237 TestMountStart/serial/Stop 1.54
238 TestMountStart/serial/RestartStopped 9.05
258 TestPreload 98.34
279 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 18.16
280 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.74
TestDownloadOnly/v1.20.0/json-events (25.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-138000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-138000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (25.567219654s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.57s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-138000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-138000: exit status 85 (295.156332ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-138000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:36 PDT |          |
	|         | -p download-only-138000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 16:36:43
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 16:36:43.343102    1445 out.go:291] Setting OutFile to fd 1 ...
	I0415 16:36:43.343314    1445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:36:43.343320    1445 out.go:304] Setting ErrFile to fd 2...
	I0415 16:36:43.343324    1445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:36:43.343525    1445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	W0415 16:36:43.343631    1445 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18647-976/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18647-976/.minikube/config/config.json: no such file or directory
	I0415 16:36:43.345514    1445 out.go:298] Setting JSON to true
	I0415 16:36:43.370647    1445 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":374,"bootTime":1713223829,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 16:36:43.370741    1445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 16:36:43.394772    1445 out.go:97] [download-only-138000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 16:36:43.416967    1445 out.go:169] MINIKUBE_LOCATION=18647
	I0415 16:36:43.395007    1445 notify.go:220] Checking for updates...
	W0415 16:36:43.395007    1445 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 16:36:43.461742    1445 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 16:36:43.482929    1445 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 16:36:43.504214    1445 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 16:36:43.546874    1445 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	W0415 16:36:43.589653    1445 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 16:36:43.590177    1445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 16:36:43.653788    1445 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 16:36:43.653919    1445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:36:43.768161    1445 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:90 SystemTime:2024-04-15 23:36:43.756112825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:36:43.789894    1445 out.go:97] Using the docker driver based on user configuration
	I0415 16:36:43.789928    1445 start.go:297] selected driver: docker
	I0415 16:36:43.789938    1445 start.go:901] validating driver "docker" against <nil>
	I0415 16:36:43.790061    1445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:36:43.911402    1445 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:90 SystemTime:2024-04-15 23:36:43.899676747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:36:43.911603    1445 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 16:36:43.915925    1445 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0415 16:36:43.916328    1445 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 16:36:43.937849    1445 out.go:169] Using Docker Desktop driver with root privileges
	I0415 16:36:43.958942    1445 cni.go:84] Creating CNI manager for ""
	I0415 16:36:43.958985    1445 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 16:36:43.959113    1445 start.go:340] cluster config:
	{Name:download-only-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 16:36:43.980853    1445 out.go:97] Starting "download-only-138000" primary control-plane node in "download-only-138000" cluster
	I0415 16:36:43.980902    1445 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 16:36:44.019047    1445 out.go:97] Pulling base image v0.0.43-1713215244-18647 ...
	I0415 16:36:44.019121    1445 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 16:36:44.019180    1445 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 16:36:44.068230    1445 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 16:36:44.068461    1445 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory
	I0415 16:36:44.068599    1445 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 16:36:44.072724    1445 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 16:36:44.072740    1445 cache.go:56] Caching tarball of preloaded images
	I0415 16:36:44.072882    1445 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 16:36:44.094742    1445 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 16:36:44.094789    1445 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:36:44.174415    1445 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 16:36:47.998797    1445 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:36:47.998975    1445 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:36:48.552425    1445 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 16:36:48.552654    1445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/download-only-138000/config.json ...
	I0415 16:36:48.552677    1445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/download-only-138000/config.json: {Name:mk2f343fe7dafe4ee8d57497667075a21a22d459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 16:36:48.552972    1445 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 16:36:48.553254    1445 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18647-976/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-138000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-138000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
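The preload fetch in the log above appends ?checksum=md5:<digest> to the download URL, and the "getting checksum" / "saving checksum" / "verifying checksum" lines show the tarball being checked against that digest. A standalone sketch of the same verification step (illustrative only; the file name is hypothetical and this is not minikube's actual downloader):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes a downloaded file and compares it to the expected
    // hex digest, mirroring the "?checksum=md5:..." convention above.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // Hypothetical local file, checked against the digest from the URL above.
        if err := verifyMD5("preloaded-images.tar.lz4", "9a82241e9b8b4ad2b5cca73108f2c7a3"); err != nil {
            fmt.Println(err)
        }
    }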

TestDownloadOnly/v1.20.0/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.66s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-138000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.39s)

TestDownloadOnly/v1.29.3/json-events (10.5s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-749000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-749000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker : (10.495819807s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (10.50s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-749000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-749000: exit status 85 (319.483936ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-138000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:36 PDT |                     |
	|         | -p download-only-138000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT | 15 Apr 24 16:37 PDT |
	| delete  | -p download-only-138000        | download-only-138000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT | 15 Apr 24 16:37 PDT |
	| start   | -o=json --download-only        | download-only-749000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT |                     |
	|         | -p download-only-749000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 16:37:10
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 16:37:10.249131    1518 out.go:291] Setting OutFile to fd 1 ...
	I0415 16:37:10.249423    1518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:37:10.249429    1518 out.go:304] Setting ErrFile to fd 2...
	I0415 16:37:10.249433    1518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:37:10.249613    1518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 16:37:10.251719    1518 out.go:298] Setting JSON to true
	I0415 16:37:10.276300    1518 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":401,"bootTime":1713223829,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 16:37:10.276439    1518 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 16:37:10.298014    1518 out.go:97] [download-only-749000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 16:37:10.319701    1518 out.go:169] MINIKUBE_LOCATION=18647
	I0415 16:37:10.298226    1518 notify.go:220] Checking for updates...
	I0415 16:37:10.362757    1518 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 16:37:10.383830    1518 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 16:37:10.404964    1518 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 16:37:10.426907    1518 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	W0415 16:37:10.470943    1518 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 16:37:10.471397    1518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 16:37:10.528715    1518 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 16:37:10.528877    1518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:37:10.651027    1518 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:90 SystemTime:2024-04-15 23:37:10.63690614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-
g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sc
hemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/doc
ker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:37:10.672862    1518 out.go:97] Using the docker driver based on user configuration
	I0415 16:37:10.672913    1518 start.go:297] selected driver: docker
	I0415 16:37:10.672938    1518 start.go:901] validating driver "docker" against <nil>
	I0415 16:37:10.673096    1518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:37:10.794487    1518 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:90 SystemTime:2024-04-15 23:37:10.779636851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:37:10.794687    1518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 16:37:10.797715    1518 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0415 16:37:10.797869    1518 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 16:37:10.818686    1518 out.go:169] Using Docker Desktop driver with root privileges
	I0415 16:37:10.839730    1518 cni.go:84] Creating CNI manager for ""
	I0415 16:37:10.839763    1518 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 16:37:10.839776    1518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 16:37:10.839881    1518 start.go:340] cluster config:
	{Name:download-only-749000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 16:37:10.860749    1518 out.go:97] Starting "download-only-749000" primary control-plane node in "download-only-749000" cluster
	I0415 16:37:10.860835    1518 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 16:37:10.882496    1518 out.go:97] Pulling base image v0.0.43-1713215244-18647 ...
	I0415 16:37:10.882520    1518 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 16:37:10.882600    1518 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 16:37:10.932302    1518 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 16:37:10.932476    1518 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory
	I0415 16:37:10.932494    1518 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory, skipping pull
	I0415 16:37:10.932499    1518 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in cache, skipping pull
	I0415 16:37:10.932507    1518 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af as a tarball
	I0415 16:37:10.934992    1518 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 16:37:10.935003    1518 cache.go:56] Caching tarball of preloaded images
	I0415 16:37:10.935191    1518 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 16:37:10.956948    1518 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 16:37:10.956972    1518 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:37:11.037969    1518 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 16:37:15.142814    1518 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:37:15.143036    1518 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:37:15.652144    1518 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 16:37:15.652418    1518 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/download-only-749000/config.json ...
	I0415 16:37:15.652447    1518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/download-only-749000/config.json: {Name:mk67d48618a5a76e2606259d084b0ffd734253c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 16:37:15.652821    1518 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 16:37:15.653146    1518 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18647-976/.minikube/cache/darwin/amd64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-749000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-749000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.32s)

TestDownloadOnly/v1.29.3/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.66s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-749000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnly/v1.30.0-rc.2/json-events (12.77s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-810000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-810000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker : (12.764917047s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (12.77s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-810000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-810000: exit status 85 (298.12756ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-138000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:36 PDT |                     |
	|         | -p download-only-138000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT | 15 Apr 24 16:37 PDT |
	| delete  | -p download-only-138000           | download-only-138000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT | 15 Apr 24 16:37 PDT |
	| start   | -o=json --download-only           | download-only-749000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT |                     |
	|         | -p download-only-749000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT | 15 Apr 24 16:37 PDT |
	| delete  | -p download-only-749000           | download-only-749000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT | 15 Apr 24 16:37 PDT |
	| start   | -o=json --download-only           | download-only-810000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 16:37 PDT |                     |
	|         | -p download-only-810000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 16:37:22
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 16:37:22.110030    1585 out.go:291] Setting OutFile to fd 1 ...
	I0415 16:37:22.110304    1585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:37:22.110309    1585 out.go:304] Setting ErrFile to fd 2...
	I0415 16:37:22.110313    1585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:37:22.110504    1585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 16:37:22.111911    1585 out.go:298] Setting JSON to true
	I0415 16:37:22.136987    1585 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":413,"bootTime":1713223829,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 16:37:22.137074    1585 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 16:37:22.159017    1585 out.go:97] [download-only-810000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 16:37:22.180068    1585 out.go:169] MINIKUBE_LOCATION=18647
	I0415 16:37:22.159125    1585 notify.go:220] Checking for updates...
	I0415 16:37:22.222941    1585 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 16:37:22.243842    1585 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 16:37:22.264964    1585 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 16:37:22.285915    1585 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	W0415 16:37:22.327965    1585 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 16:37:22.328257    1585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 16:37:22.383329    1585 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 16:37:22.383475    1585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:37:22.501537    1585 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:90 SystemTime:2024-04-15 23:37:22.490993665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:37:22.522977    1585 out.go:97] Using the docker driver based on user configuration
	I0415 16:37:22.523000    1585 start.go:297] selected driver: docker
	I0415 16:37:22.523007    1585 start.go:901] validating driver "docker" against <nil>
	I0415 16:37:22.523126    1585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:37:22.642296    1585 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:90 SystemTime:2024-04-15 23:37:22.631839487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:37:22.642480    1585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 16:37:22.645376    1585 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0415 16:37:22.645527    1585 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 16:37:22.666808    1585 out.go:169] Using Docker Desktop driver with root privileges
	I0415 16:37:22.688254    1585 cni.go:84] Creating CNI manager for ""
	I0415 16:37:22.688290    1585 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 16:37:22.688317    1585 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 16:37:22.688426    1585 start.go:340] cluster config:
	{Name:download-only-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 16:37:22.710173    1585 out.go:97] Starting "download-only-810000" primary control-plane node in "download-only-810000" cluster
	I0415 16:37:22.710206    1585 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 16:37:22.730850    1585 out.go:97] Pulling base image v0.0.43-1713215244-18647 ...
	I0415 16:37:22.730882    1585 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 16:37:22.730921    1585 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 16:37:22.778782    1585 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 16:37:22.778959    1585 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory
	I0415 16:37:22.778980    1585 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory, skipping pull
	I0415 16:37:22.778986    1585 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in cache, skipping pull
	I0415 16:37:22.778994    1585 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af as a tarball
	I0415 16:37:22.784051    1585 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 16:37:22.784074    1585 cache.go:56] Caching tarball of preloaded images
	I0415 16:37:22.784234    1585 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 16:37:22.805965    1585 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 16:37:22.805981    1585 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:37:22.883998    1585 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:9834337eee074d8b5e25932a2917a549 -> /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 16:37:27.462975    1585 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:37:27.463180    1585 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18647-976/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 16:37:27.960034    1585 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 16:37:27.960279    1585 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/download-only-810000/config.json ...
	I0415 16:37:27.960304    1585 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/download-only-810000/config.json: {Name:mkaae880c8028452a30bc7563e4136a837a9f8a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 16:37:27.960578    1585 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 16:37:27.960837    1585 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18647-976/.minikube/cache/darwin/amd64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-810000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-810000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.30s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.65s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.65s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-810000
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.93s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-349000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-349000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-349000
--- PASS: TestDownloadOnlyKic (1.93s)

TestBinaryMirror (1.69s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-376000 --alsologtostderr --binary-mirror http://127.0.0.1:49349 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-376000 --alsologtostderr --binary-mirror http://127.0.0.1:49349 --driver=docker : (1.064257144s)
helpers_test.go:175: Cleaning up "binary-mirror-376000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-376000
--- PASS: TestBinaryMirror (1.69s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-306000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-306000: exit status 85 (232.033764ms)

-- stdout --
	* Profile "addons-306000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-306000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-306000: exit status 85 (211.090714ms)

-- stdout --
	* Profile "addons-306000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (143.52s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-306000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-306000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.521898783s)
--- PASS: TestAddons/Setup (143.52s)

TestAddons/parallel/InspektorGadget (11.96s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ccln4" [4bef9bb4-ae62-421d-8b6b-bcbc097c04fd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00545507s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-306000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-306000: (5.955343919s)
--- PASS: TestAddons/parallel/InspektorGadget (11.96s)

TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.891461ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-rtn2z" [73721090-8f2d-4f3f-813d-0768533517d8] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004963969s
addons_test.go:415: (dbg) Run:  kubectl --context addons-306000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-306000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

TestAddons/parallel/HelmTiller (10.1s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.274464ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-s4dph" [8c9ab275-90cf-4d5c-989f-b8f42591e907] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005165075s
addons_test.go:473: (dbg) Run:  kubectl --context addons-306000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-306000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.397454702s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-306000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.10s)

TestAddons/parallel/CSI (58.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 21.761538ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-306000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-306000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [99f7e3d9-b05d-4b18-8d66-d5f2b04c2ad3] Pending
helpers_test.go:344: "task-pv-pod" [99f7e3d9-b05d-4b18-8d66-d5f2b04c2ad3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [99f7e3d9-b05d-4b18-8d66-d5f2b04c2ad3] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00602341s
addons_test.go:584: (dbg) Run:  kubectl --context addons-306000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-306000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-306000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-306000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-306000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-306000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-306000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [73dfe912-291a-49ab-b2e0-8cf5f210ce43] Pending
helpers_test.go:344: "task-pv-pod-restore" [73dfe912-291a-49ab-b2e0-8cf5f210ce43] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [73dfe912-291a-49ab-b2e0-8cf5f210ce43] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004242374s
addons_test.go:626: (dbg) Run:  kubectl --context addons-306000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-306000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-306000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-306000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-306000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.765927366s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-306000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.39s)

TestAddons/parallel/Headlamp (13.23s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-306000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-306000 --alsologtostderr -v=1: (1.225822753s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-jcwrb" [e84ed5fa-dfd7-48a3-a0fc-77a3373cc6ce] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-jcwrb" [e84ed5fa-dfd7-48a3-a0fc-77a3373cc6ce] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004675281s
--- PASS: TestAddons/parallel/Headlamp (13.23s)

TestAddons/parallel/CloudSpanner (6.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-9p826" [90586401-4485-49af-aaf8-bb0ca74a0cf8] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0042591s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-306000
--- PASS: TestAddons/parallel/CloudSpanner (6.67s)

TestAddons/parallel/LocalPath (54.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-306000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-306000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4df38d44-faa1-4efd-8809-4ceafd7a3d33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4df38d44-faa1-4efd-8809-4ceafd7a3d33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4df38d44-faa1-4efd-8809-4ceafd7a3d33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00561196s
addons_test.go:891: (dbg) Run:  kubectl --context addons-306000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-306000 ssh "cat /opt/local-path-provisioner/pvc-ca2eaa8d-9efe-465a-8186-d2cc3df57cf5_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-306000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-306000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-306000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-306000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.091350081s)
--- PASS: TestAddons/parallel/LocalPath (54.13s)

TestAddons/parallel/NvidiaDevicePlugin (5.81s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-l9fkt" [fc7fce14-cd7e-4f94-96eb-ef4de03a50c7] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005291435s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-306000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.81s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-psjd8" [56b15ee6-605a-4c04-ac6b-1a4501a879f8] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004248845s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-306000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-306000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.78s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-306000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-306000: (11.050345526s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-306000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-306000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-306000
--- PASS: TestAddons/StoppedEnableDisable (11.78s)

TestHyperKitDriverInstallOrUpdate (7.25s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.25s)

TestErrorSpam/setup (20.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-566000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-566000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 --driver=docker : (20.818816634s)
--- PASS: TestErrorSpam/setup (20.82s)

TestErrorSpam/start (2.08s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 start --dry-run
--- PASS: TestErrorSpam/start (2.08s)

TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (1.66s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 pause
--- PASS: TestErrorSpam/pause (1.66s)

TestErrorSpam/unpause (1.79s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (2.9s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 stop: (2.251698921s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-566000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-566000 stop
--- PASS: TestErrorSpam/stop (2.90s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18647-976/.minikube/files/etc/test/nested/copy/1443/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.63s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-829000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-829000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (38.631053878s)
--- PASS: TestFunctional/serial/StartWithProxy (38.63s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.11s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-829000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-829000 --alsologtostderr -v=8: (36.113474964s)
functional_test.go:659: soft start took 36.113999291s for "functional-829000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.11s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-829000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 cache add registry.k8s.io/pause:3.1: (1.290098127s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 cache add registry.k8s.io/pause:3.3: (1.299529145s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 cache add registry.k8s.io/pause:latest: (1.127828014s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)
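
For reference, the remote-cache flow driven above can be reproduced directly; a minimal sketch, assuming a stock minikube binary and the default profile:

    # Pull images on the host and load them into the node's container runtime.
    minikube cache add registry.k8s.io/pause:3.1
    minikube cache add registry.k8s.io/pause:latest
    # List what minikube is tracking in its on-host cache.
    minikube cache list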

TestFunctional/serial/CacheCmd/cache/add_local (1.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1319053970/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cache add minikube-local-cache-test:functional-829000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 cache add minikube-local-cache-test:functional-829000: (1.062385639s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cache delete minikube-local-cache-test:functional-829000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-829000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.10s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (398.200832ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)
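
The reload round-trip above is worth spelling out, since it shows the on-host cache surviving image deletion inside the node; a sketch under the same assumptions:

    # Delete the image inside the node, confirm it is gone, then restore it
    # from minikube's on-host cache without contacting the registry.
    minikube ssh -- sudo docker rmi registry.k8s.io/pause:latest
    minikube ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # exit status 1: no such image
    minikube cache reload
    minikube ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again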

TestFunctional/serial/CacheCmd/cache/delete (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1.03s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 kubectl -- --context functional-829000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 kubectl -- --context functional-829000 get pods: (1.029916697s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.03s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.38s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-829000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-829000 get pods: (1.380158024s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.38s)

TestFunctional/serial/ExtraConfig (42.28s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-829000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-829000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.28021362s)
functional_test.go:757: restart took 42.280361425s for "functional-829000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.28s)
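
The restart above passes a component flag through to the apiserver; a minimal sketch, assuming an existing profile (name hypothetical):

    # --extra-config takes component.flag=value; --wait=all blocks until all
    # verified components report Ready after the restart.
    minikube start -p my-profile \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all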

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-829000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.21s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 logs: (3.210397524s)
--- PASS: TestFunctional/serial/LogsCmd (3.21s)

TestFunctional/serial/LogsFileCmd (3.06s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3552803235/001/logs.txt
E0415 16:45:04.681395    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:04.749995    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:04.760166    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:04.780315    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:04.822573    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:04.903210    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3552803235/001/logs.txt: (3.060460534s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.06s)

TestFunctional/serial/InvalidService (4.39s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-829000 apply -f testdata/invalidsvc.yaml
E0415 16:45:05.064143    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:05.386310    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:06.026571    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:45:07.306877    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-829000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-829000: exit status 115 (558.774083ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32106 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-829000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
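
The failure mode above is deliberate: a Service whose selector matches no running pod makes `minikube service` print the URL table and then exit 115 with SVC_UNREACHABLE. A sketch, assuming a hypothetical manifest (the testdata/invalidsvc.yaml contents are not shown in this report):

    kubectl apply -f invalidsvc.yaml          # Service with no backing pods
    minikube service invalid-svc -p my-profile; echo "exit: $?"   # exit: 115
    kubectl delete -f invalidsvc.yaml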

TestFunctional/parallel/ConfigCmd (0.61s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 config get cpus: exit status 14 (72.140778ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 config unset cpus
E0415 16:45:09.868113    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 config get cpus: exit status 14 (69.588727ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.61s)
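
The exit codes above are the point of the test: `config get` on an unset key exits 14 rather than 0. A minimal sketch of the round-trip (profile name hypothetical):

    minikube -p my-profile config get cpus     # exit 14: key not found
    minikube -p my-profile config set cpus 2
    minikube -p my-profile config get cpus     # prints 2
    minikube -p my-profile config unset cpus
    minikube -p my-profile config get cpus     # exit 14 again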

TestFunctional/parallel/DashboardCmd (9.43s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-829000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-829000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3978: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.43s)

TestFunctional/parallel/DryRun (1.69s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (712.255407ms)

-- stdout --
	* [functional-829000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0415 16:46:29.632591    3916 out.go:291] Setting OutFile to fd 1 ...
	I0415 16:46:29.632859    3916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:46:29.632864    3916 out.go:304] Setting ErrFile to fd 2...
	I0415 16:46:29.632868    3916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:46:29.633051    3916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 16:46:29.634426    3916 out.go:298] Setting JSON to false
	I0415 16:46:29.657418    3916 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":960,"bootTime":1713223829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 16:46:29.657515    3916 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 16:46:29.679077    3916 out.go:177] * [functional-829000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 16:46:29.722036    3916 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 16:46:29.722052    3916 notify.go:220] Checking for updates...
	I0415 16:46:29.764134    3916 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 16:46:29.785060    3916 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 16:46:29.806124    3916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 16:46:29.827265    3916 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 16:46:29.885172    3916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 16:46:29.906865    3916 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 16:46:29.907640    3916 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 16:46:29.962794    3916 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 16:46:29.962964    3916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:46:30.073892    3916 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:false NGoroutines:103 SystemTime:2024-04-15 23:46:30.062966441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:46:30.132281    3916 out.go:177] * Using the docker driver based on existing profile
	I0415 16:46:30.154222    3916 start.go:297] selected driver: docker
	I0415 16:46:30.154248    3916 start.go:901] validating driver "docker" against &{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 16:46:30.154375    3916 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 16:46:30.179193    3916 out.go:177] 
	W0415 16:46:30.200362    3916 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 16:46:30.221172    3916 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-829000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.69s)
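
--dry-run performs flag and driver validation without touching the existing cluster, which is why the undersized memory request above fails fast; a sketch:

    # 250MB is below the 1800MB usable minimum, so this exits 23
    # (RSRC_INSUFFICIENT_REQ_MEMORY) without modifying the profile.
    minikube start -p my-profile --dry-run --memory 250MB --driver=docker
    echo "exit: $?"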

TestFunctional/parallel/InternationalLanguage (0.62s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (620.843329ms)

-- stdout --
	* [functional-829000] minikube v1.33.0-beta.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18647
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0415 16:46:29.005110    3898 out.go:291] Setting OutFile to fd 1 ...
	I0415 16:46:29.005276    3898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:46:29.005282    3898 out.go:304] Setting ErrFile to fd 2...
	I0415 16:46:29.005285    3898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:46:29.005482    3898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 16:46:29.007585    3898 out.go:298] Setting JSON to false
	I0415 16:46:29.032142    3898 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":960,"bootTime":1713223829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 16:46:29.032249    3898 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 16:46:29.054385    3898 out.go:177] * [functional-829000] minikube v1.33.0-beta.0 sur Darwin 14.4.1
	I0415 16:46:29.075194    3898 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 16:46:29.096345    3898 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
	I0415 16:46:29.075208    3898 notify.go:220] Checking for updates...
	I0415 16:46:29.140300    3898 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 16:46:29.161210    3898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 16:46:29.182206    3898 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube
	I0415 16:46:29.203201    3898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 16:46:29.224370    3898 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 16:46:29.224903    3898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 16:46:29.280984    3898 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 16:46:29.281136    3898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 16:46:29.395438    3898 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:false NGoroutines:103 SystemTime:2024-04-15 23:46:29.385364186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 16:46:29.438139    3898 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0415 16:46:29.458916    3898 start.go:297] selected driver: docker
	I0415 16:46:29.458929    3898 start.go:901] validating driver "docker" against &{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 16:46:29.458994    3898 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 16:46:29.483262    3898 out.go:177] 
	W0415 16:46:29.504240    3898 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 16:46:29.525092    3898 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.62s)

TestFunctional/parallel/StatusCmd (1.19s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
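
The -f flag above takes a Go template over the status fields, alongside the default table and -o json forms; a sketch (profile name hypothetical, template fields as exercised by the test):

    minikube -p my-profile status
    minikube -p my-profile status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    minikube -p my-profile status -o json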

TestFunctional/parallel/AddonsCmd (0.28s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (27.18s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [916be0d3-2521-4964-97e1-7a22054eae66] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005834419s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-829000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-829000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-829000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-829000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1afacf67-67d4-4147-b089-887482754e90] Pending
helpers_test.go:344: "sp-pod" [1afacf67-67d4-4147-b089-887482754e90] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1afacf67-67d4-4147-b089-887482754e90] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004456654s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-829000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-829000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-829000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [87ee5fe0-5b09-46e8-94a1-434945edb81f] Pending
helpers_test.go:344: "sp-pod" [87ee5fe0-5b09-46e8-94a1-434945edb81f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [87ee5fe0-5b09-46e8-94a1-434945edb81f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00364112s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-829000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.18s)
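
The test above is a persistence check: data written through the claim must survive deletion of the pod that wrote it. A sketch, assuming manifests equivalent to the testdata files (contents not shown in this report):

    kubectl apply -f pvc.yaml                  # PVC named "myclaim"
    kubectl apply -f pod.yaml                  # pod "sp-pod" mounting the claim at /tmp/mount
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f pod.yaml                 # destroy the writer pod
    kubectl apply -f pod.yaml                  # recreate it against the same claim
    kubectl exec sp-pod -- ls /tmp/mount       # "foo" is still there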

TestFunctional/parallel/SSHCmd (0.83s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

TestFunctional/parallel/CpCmd (2.44s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh -n functional-829000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cp functional-829000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3602972255/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh -n functional-829000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh -n functional-829000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)
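
`minikube cp` copies in both directions, creating missing target directories on the node; a sketch of the three transfers exercised above (profile name hypothetical):

    minikube -p my-profile cp ./cp-test.txt /home/docker/cp-test.txt            # host -> node
    minikube -p my-profile cp my-profile:/home/docker/cp-test.txt ./cp-test.txt # node -> host
    minikube -p my-profile cp ./cp-test.txt /tmp/does/not/exist/cp-test.txt     # target dir is created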

TestFunctional/parallel/MySQL (32.07s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-829000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
E0415 16:45:14.988755    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
helpers_test.go:344: "mysql-859648c796-5xnjz" [c5def349-2876-4dec-ba2b-5b5e60094639] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5xnjz" [c5def349-2876-4dec-ba2b-5b5e60094639] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.004419387s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-829000 exec mysql-859648c796-5xnjz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-829000 exec mysql-859648c796-5xnjz -- mysql -ppassword -e "show databases;": exit status 1 (128.833055ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-829000 exec mysql-859648c796-5xnjz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-829000 exec mysql-859648c796-5xnjz -- mysql -ppassword -e "show databases;": exit status 1 (158.349615ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-829000 exec mysql-859648c796-5xnjz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-829000 exec mysql-859648c796-5xnjz -- mysql -ppassword -e "show databases;": exit status 1 (138.806499ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0415 16:45:45.709060    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-829000 exec mysql-859648c796-5xnjz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.07s)
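
Note: the three "Non-zero exit" runs above are the test's normal retry loop, not failures; the pod is Running but mysqld is still initializing, so authentication (ERROR 1045) and then the socket (ERROR 2002) are briefly unavailable before the final query succeeds. A rough manual equivalent, assuming the pod name is looked up fresh (it changes per run):

	kubectl --context functional-829000 get pods -l app=mysql
	kubectl --context functional-829000 exec <mysql-pod> -- mysql -ppassword -e "show databases;"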

                                                
                                    
TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1443/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo cat /etc/test/nested/copy/1443/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

                                                
                                    
TestFunctional/parallel/CertSync (2.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1443.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/1443.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1443.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo cat /usr/share/ca-certificates/1443.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/14432.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo cat /usr/share/ca-certificates/14432.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.56s)
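
Note: the hashed names checked above (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) appear to be the OpenSSL subject-hash form of the synced 1443.pem and 14432.pem files. To verify the mapping by hand (hypothetical local copies of the .pem files assumed):

	openssl x509 -noout -hash -in 1443.pem    # expected: 51391683
	openssl x509 -noout -hash -in 14432.pem   # expected: 3ec20f2e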

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-829000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 ssh "sudo systemctl is-active crio": exit status 1 (437.508991ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
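
Note: this pass relies on systemctl's exit-code convention: "is-active" prints "inactive" and exits with status 3 for a stopped unit, which is exactly what the test wants for crio on a Docker-runtime cluster. The active runtime can be probed the same way:

	out/minikube-darwin-amd64 -p functional-829000 ssh "sudo systemctl is-active docker"   # should print "active" and exit 0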

                                                
                                    
TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.15s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

                                                
                                    
TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-829000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-829000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-829000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-829000 image ls --format short --alsologtostderr:
I0415 16:46:43.025372    4046 out.go:291] Setting OutFile to fd 1 ...
I0415 16:46:43.025809    4046 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:43.025815    4046 out.go:304] Setting ErrFile to fd 2...
I0415 16:46:43.025819    4046 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:43.025998    4046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
I0415 16:46:43.026592    4046 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:43.026690    4046 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:43.027076    4046 cli_runner.go:164] Run: docker container inspect functional-829000 --format={{.State.Status}}
I0415 16:46:43.081066    4046 ssh_runner.go:195] Run: systemctl --version
I0415 16:46:43.081141    4046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-829000
I0415 16:46:43.139029    4046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50061 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/functional-829000/id_rsa Username:docker}
I0415 16:46:43.223774    4046 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-829000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-829000 | f2a2ce2ef08ab | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| docker.io/library/nginx                     | latest            | c613f16b66424 | 187MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-829000 | c1caa22a4ee46 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-829000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-829000 image ls --format table --alsologtostderr:
I0415 16:46:46.891965    4116 out.go:291] Setting OutFile to fd 1 ...
I0415 16:46:46.892598    4116 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:46.892625    4116 out.go:304] Setting ErrFile to fd 2...
I0415 16:46:46.892638    4116 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:46.893209    4116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
I0415 16:46:46.893860    4116 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:46.893950    4116 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:46.894341    4116 cli_runner.go:164] Run: docker container inspect functional-829000 --format={{.State.Status}}
I0415 16:46:46.949547    4116 ssh_runner.go:195] Run: systemctl --version
I0415 16:46:46.949617    4116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-829000
I0415 16:46:47.004707    4116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50061 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/functional-829000/id_rsa Username:docker}
I0415 16:46:47.093332    4116 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-829000 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"f2a2ce2ef08ab3c379817659c7840a4b05664a9d7fa08a0c546550d58b6
2c7db","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-829000"],"size":"1240000"},{"id":"c1caa22a4ee46c5bb3ec12bd2b23e7241a4cdc19972902996e70b85e084967fb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-829000"],"size":"30"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":[],"repoTags":["docker.i
o/library/nginx:latest"],"size":"187000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-829000"],"size":"32900000"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"
59800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-829000 image ls --format json --alsologtostderr:
I0415 16:46:46.583383    4107 out.go:291] Setting OutFile to fd 1 ...
I0415 16:46:46.583638    4107 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:46.583643    4107 out.go:304] Setting ErrFile to fd 2...
I0415 16:46:46.583647    4107 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:46.583826    4107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
I0415 16:46:46.584403    4107 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:46.584493    4107 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:46.584978    4107 cli_runner.go:164] Run: docker container inspect functional-829000 --format={{.State.Status}}
I0415 16:46:46.636687    4107 ssh_runner.go:195] Run: systemctl --version
I0415 16:46:46.636759    4107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-829000
I0415 16:46:46.692417    4107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50061 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/functional-829000/id_rsa Username:docker}
I0415 16:46:46.778704    4107 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-829000 image ls --format yaml --alsologtostderr:
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: c1caa22a4ee46c5bb3ec12bd2b23e7241a4cdc19972902996e70b85e084967fb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-829000
size: "30"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-829000
size: "32900000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-829000 image ls --format yaml --alsologtostderr:
I0415 16:46:43.350023    4058 out.go:291] Setting OutFile to fd 1 ...
I0415 16:46:43.350200    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:43.350206    4058 out.go:304] Setting ErrFile to fd 2...
I0415 16:46:43.350210    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:43.350406    4058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
I0415 16:46:43.351055    4058 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:43.351214    4058 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:43.351901    4058 cli_runner.go:164] Run: docker container inspect functional-829000 --format={{.State.Status}}
I0415 16:46:43.415882    4058 ssh_runner.go:195] Run: systemctl --version
I0415 16:46:43.416011    4058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-829000
I0415 16:46:43.478742    4058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50061 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/functional-829000/id_rsa Username:docker}
I0415 16:46:43.568600    4058 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)
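
Note: the short/table/json/yaml listings above are four renderings of the same data; per the stderr traces, each invocation runs docker images --no-trunc --format "{{json .}}" inside the node over SSH. For example:

	out/minikube-darwin-amd64 -p functional-829000 image ls --format table
	out/minikube-darwin-amd64 -p functional-829000 image ls --format json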

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 ssh pgrep buildkitd: exit status 1 (408.694072ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image build -t localhost/my-image:functional-829000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 image build -t localhost/my-image:functional-829000 testdata/build --alsologtostderr: (2.134541137s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-829000 image build -t localhost/my-image:functional-829000 testdata/build --alsologtostderr:
I0415 16:46:44.112096    4081 out.go:291] Setting OutFile to fd 1 ...
I0415 16:46:44.112365    4081 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:44.112373    4081 out.go:304] Setting ErrFile to fd 2...
I0415 16:46:44.112377    4081 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 16:46:44.112584    4081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
I0415 16:46:44.113323    4081 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:44.116501    4081 config.go:182] Loaded profile config "functional-829000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 16:46:44.116982    4081 cli_runner.go:164] Run: docker container inspect functional-829000 --format={{.State.Status}}
I0415 16:46:44.172926    4081 ssh_runner.go:195] Run: systemctl --version
I0415 16:46:44.172998    4081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-829000
I0415 16:46:44.233638    4081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50061 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/functional-829000/id_rsa Username:docker}
I0415 16:46:44.319203    4081 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.585686942.tar
I0415 16:46:44.319340    4081 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 16:46:44.329409    4081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.585686942.tar
I0415 16:46:44.333450    4081 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.585686942.tar: stat -c "%s %y" /var/lib/minikube/build/build.585686942.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.585686942.tar': No such file or directory
I0415 16:46:44.333482    4081 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.585686942.tar --> /var/lib/minikube/build/build.585686942.tar (3072 bytes)
I0415 16:46:44.357206    4081 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.585686942
I0415 16:46:44.366962    4081 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.585686942 -xf /var/lib/minikube/build/build.585686942.tar
I0415 16:46:44.376316    4081 docker.go:360] Building image: /var/lib/minikube/build/build.585686942
I0415 16:46:44.376404    4081 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-829000 /var/lib/minikube/build/build.585686942
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f2a2ce2ef08ab3c379817659c7840a4b05664a9d7fa08a0c546550d58b62c7db done
#8 naming to localhost/my-image:functional-829000 done
#8 DONE 0.0s
I0415 16:46:46.122765    4081 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-829000 /var/lib/minikube/build/build.585686942: (1.746369019s)
I0415 16:46:46.123061    4081 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.585686942
I0415 16:46:46.132306    4081 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.585686942.tar
I0415 16:46:46.141604    4081 build_images.go:217] Built localhost/my-image:functional-829000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.585686942.tar
I0415 16:46:46.141629    4081 build_images.go:133] succeeded building to: functional-829000
I0415 16:46:46.141634    4081 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.90s)
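
Note: the initial "pgrep buildkitd" failure is a probe, not an error: buildkitd is not running on this Docker-runtime node, and the build is executed as "docker build" over SSH (docker.go:360 above). To repeat the same build by hand:

	out/minikube-darwin-amd64 -p functional-829000 image build -t localhost/my-image:functional-829000 testdata/build
	out/minikube-darwin-amd64 -p functional-829000 image ls | grep my-image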

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.91935062s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-829000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.58s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-829000 docker-env) && out/minikube-darwin-amd64 status -p functional-829000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-829000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.58s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.34s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr: (3.673687395s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr: (2.271953379s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.980375052s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-829000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr: (4.531534929s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls
E0415 16:45:25.228922    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image save gcr.io/google-containers/addon-resizer:functional-829000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 image save gcr.io/google-containers/addon-resizer:functional-829000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.645996774s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image rm gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.006045852s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-829000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 image save --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 image save --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr: (1.476534381s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-829000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)
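
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a full image round trip. The same sequence by hand (any writable tar path works; /tmp is used here for illustration):

	out/minikube-darwin-amd64 -p functional-829000 image save gcr.io/google-containers/addon-resizer:functional-829000 /tmp/addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-829000 image rm gcr.io/google-containers/addon-resizer:functional-829000
	out/minikube-darwin-amd64 -p functional-829000 image load /tmp/addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-829000 image save --daemon gcr.io/google-containers/addon-resizer:functional-829000
	docker image inspect gcr.io/google-containers/addon-resizer:functional-829000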

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-829000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-829000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-829000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-829000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3433: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-829000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-829000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0ebb9c27-29fb-4e2c-be90-e2c5fb185e9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0ebb9c27-29fb-4e2c-be90-e2c5fb185e9c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003221652s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-829000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-829000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3463: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
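
Note: the "unable to kill pid ...: os: process already finished" lines are cleanup noise, not failures; the tunnel processes had already exited. The lifecycle under test, run manually (the terminal must stay open with the Docker driver on darwin):

	out/minikube-darwin-amd64 -p functional-829000 tunnel &
	kubectl --context functional-829000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'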

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-829000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-829000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-kb4f8" [68790af9-7039-4443-b617-2d10d3fad1fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-kb4f8" [68790af9-7039-4443-b617-2d10d3fad1fa] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005040951s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 service list
functional_test.go:1455: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 service list: (1.016779042s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-darwin-amd64 -p functional-829000 service list -o json: (1.016654759s)
functional_test.go:1490: Took "1.016737814s" to run "out/minikube-darwin-amd64 -p functional-829000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 service --namespace=default --https --url hello-node: signal: killed (15.002626421s)

-- stdout --
	https://127.0.0.1:50379

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50379
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
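
Note: "signal: killed" here is the harness terminating the command after it printed the endpoint; "service --url" stays in the foreground on darwin with the Docker driver (hence the stderr hint), so killing it after 15s is the expected shutdown. Manually:

	out/minikube-darwin-amd64 -p functional-829000 service --namespace=default --https --url hello-node   # leave running; Ctrl-C when done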

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "450.794522ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "83.324241ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "439.214964ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "86.675114ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/MountCmd/any-port (7.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2125500398/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713224774887398000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2125500398/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713224774887398000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2125500398/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713224774887398000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2125500398/001/test-1713224774887398000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.91155ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 15 23:46 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 15 23:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 15 23:46 test-1713224774887398000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh cat /mount-9p/test-1713224774887398000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-829000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [49f36aff-b123-4dd9-ab82-f3db1fbfd1d6] Pending
helpers_test.go:344: "busybox-mount" [49f36aff-b123-4dd9-ab82-f3db1fbfd1d6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [49f36aff-b123-4dd9-ab82-f3db1fbfd1d6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [49f36aff-b123-4dd9-ab82-f3db1fbfd1d6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003067378s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-829000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2125500398/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.63s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 service hello-node --url --format={{.IP}}: signal: killed (15.003233255s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/MountCmd/specific-port (2.39s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1217226555/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (404.092056ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1217226555/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 ssh "sudo umount -f /mount-9p": exit status 1 (353.757926ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-829000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1217226555/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup554776014/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup554776014/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup554776014/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 1 (582.962136ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T" /mount1
E0415 16:46:26.668781    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-829000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup554776014/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup554776014/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-829000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup554776014/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.85s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-829000 service hello-node --url
2024/04/15 16:46:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-829000 service hello-node --url: signal: killed (15.004338547s)
-- stdout --
	http://127.0.0.1:50495
-- /stdout --
** stderr **
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50495
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-829000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-829000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-829000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (104.11s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-911000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0415 16:47:48.588504    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-911000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m43.010000288s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: (1.101057337s)
--- PASS: TestMultiControlPlane/serial/StartCluster (104.11s)

TestMultiControlPlane/serial/DeployApp (5.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-911000 -- rollout status deployment/busybox: (2.757267892s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-dqnc7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-sqlc4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-wjlb4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-dqnc7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-sqlc4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-wjlb4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-dqnc7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-sqlc4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-wjlb4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.26s)

TestMultiControlPlane/serial/PingHostFromPods (1.44s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-dqnc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-dqnc7 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-sqlc4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-sqlc4 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-wjlb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-911000 -- exec busybox-7fdf7869d9-wjlb4 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.44s)

TestMultiControlPlane/serial/AddWorkerNode (19.52s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-911000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-911000 -v=7 --alsologtostderr: (18.172062316s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: (1.346802673s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.52s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-911000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.09274984s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

TestMultiControlPlane/serial/CopyFile (24.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 status --output json -v=7 --alsologtostderr: (1.345499664s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp testdata/cp-test.txt ha-911000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2539331588/001/cp-test_ha-911000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000:/home/docker/cp-test.txt ha-911000-m02:/home/docker/cp-test_ha-911000_ha-911000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test_ha-911000_ha-911000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000:/home/docker/cp-test.txt ha-911000-m03:/home/docker/cp-test_ha-911000_ha-911000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test_ha-911000_ha-911000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000:/home/docker/cp-test.txt ha-911000-m04:/home/docker/cp-test_ha-911000_ha-911000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test_ha-911000_ha-911000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp testdata/cp-test.txt ha-911000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2539331588/001/cp-test_ha-911000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m02:/home/docker/cp-test.txt ha-911000:/home/docker/cp-test_ha-911000-m02_ha-911000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test_ha-911000-m02_ha-911000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m02:/home/docker/cp-test.txt ha-911000-m03:/home/docker/cp-test_ha-911000-m02_ha-911000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test_ha-911000-m02_ha-911000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m02:/home/docker/cp-test.txt ha-911000-m04:/home/docker/cp-test_ha-911000-m02_ha-911000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test_ha-911000-m02_ha-911000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp testdata/cp-test.txt ha-911000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2539331588/001/cp-test_ha-911000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m03:/home/docker/cp-test.txt ha-911000:/home/docker/cp-test_ha-911000-m03_ha-911000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test_ha-911000-m03_ha-911000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m03:/home/docker/cp-test.txt ha-911000-m02:/home/docker/cp-test_ha-911000-m03_ha-911000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test_ha-911000-m03_ha-911000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m03:/home/docker/cp-test.txt ha-911000-m04:/home/docker/cp-test_ha-911000-m03_ha-911000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test_ha-911000-m03_ha-911000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp testdata/cp-test.txt ha-911000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2539331588/001/cp-test_ha-911000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m04:/home/docker/cp-test.txt ha-911000:/home/docker/cp-test_ha-911000-m04_ha-911000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000 "sudo cat /home/docker/cp-test_ha-911000-m04_ha-911000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m04:/home/docker/cp-test.txt ha-911000-m02:/home/docker/cp-test_ha-911000-m04_ha-911000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m02 "sudo cat /home/docker/cp-test_ha-911000-m04_ha-911000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 cp ha-911000-m04:/home/docker/cp-test.txt ha-911000-m03:/home/docker/cp-test_ha-911000-m04_ha-911000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 ssh -n ha-911000-m03 "sudo cat /home/docker/cp-test_ha-911000-m04_ha-911000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (24.46s)

TestMultiControlPlane/serial/StopSecondaryNode (11.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 node stop m02 -v=7 --alsologtostderr: (10.907938757s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: exit status 7 (1.021985097s)
-- stdout --
	ha-911000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-911000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-911000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr **
	I0415 16:49:42.728530    5354 out.go:291] Setting OutFile to fd 1 ...
	I0415 16:49:42.728926    5354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:49:42.728934    5354 out.go:304] Setting ErrFile to fd 2...
	I0415 16:49:42.728939    5354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:49:42.729136    5354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 16:49:42.729328    5354 out.go:298] Setting JSON to false
	I0415 16:49:42.729352    5354 mustload.go:65] Loading cluster: ha-911000
	I0415 16:49:42.729393    5354 notify.go:220] Checking for updates...
	I0415 16:49:42.729660    5354 config.go:182] Loaded profile config "ha-911000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 16:49:42.729677    5354 status.go:255] checking status of ha-911000 ...
	I0415 16:49:42.730081    5354 cli_runner.go:164] Run: docker container inspect ha-911000 --format={{.State.Status}}
	I0415 16:49:42.781646    5354 status.go:330] ha-911000 host status = "Running" (err=<nil>)
	I0415 16:49:42.781710    5354 host.go:66] Checking if "ha-911000" exists ...
	I0415 16:49:42.782039    5354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-911000
	I0415 16:49:42.832898    5354 host.go:66] Checking if "ha-911000" exists ...
	I0415 16:49:42.833219    5354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 16:49:42.833277    5354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-911000
	I0415 16:49:42.883692    5354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50525 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/ha-911000/id_rsa Username:docker}
	I0415 16:49:42.964504    5354 ssh_runner.go:195] Run: systemctl --version
	I0415 16:49:42.969155    5354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 16:49:42.979757    5354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-911000
	I0415 16:49:43.030602    5354 kubeconfig.go:125] found "ha-911000" server: "https://127.0.0.1:50524"
	I0415 16:49:43.030631    5354 api_server.go:166] Checking apiserver status ...
	I0415 16:49:43.030668    5354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 16:49:43.041409    5354 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2178/cgroup
	W0415 16:49:43.050319    5354 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 16:49:43.050444    5354 ssh_runner.go:195] Run: ls
	I0415 16:49:43.054680    5354 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50524/healthz ...
	I0415 16:49:43.059965    5354 api_server.go:279] https://127.0.0.1:50524/healthz returned 200:
	ok
	I0415 16:49:43.059978    5354 status.go:422] ha-911000 apiserver status = Running (err=<nil>)
	I0415 16:49:43.059990    5354 status.go:257] ha-911000 status: &{Name:ha-911000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 16:49:43.060009    5354 status.go:255] checking status of ha-911000-m02 ...
	I0415 16:49:43.060320    5354 cli_runner.go:164] Run: docker container inspect ha-911000-m02 --format={{.State.Status}}
	I0415 16:49:43.110462    5354 status.go:330] ha-911000-m02 host status = "Stopped" (err=<nil>)
	I0415 16:49:43.110486    5354 status.go:343] host is not running, skipping remaining checks
	I0415 16:49:43.110495    5354 status.go:257] ha-911000-m02 status: &{Name:ha-911000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 16:49:43.110508    5354 status.go:255] checking status of ha-911000-m03 ...
	I0415 16:49:43.110813    5354 cli_runner.go:164] Run: docker container inspect ha-911000-m03 --format={{.State.Status}}
	I0415 16:49:43.161800    5354 status.go:330] ha-911000-m03 host status = "Running" (err=<nil>)
	I0415 16:49:43.161827    5354 host.go:66] Checking if "ha-911000-m03" exists ...
	I0415 16:49:43.162109    5354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-911000-m03
	I0415 16:49:43.212048    5354 host.go:66] Checking if "ha-911000-m03" exists ...
	I0415 16:49:43.212297    5354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 16:49:43.212344    5354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-911000-m03
	I0415 16:49:43.262864    5354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50627 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/ha-911000-m03/id_rsa Username:docker}
	I0415 16:49:43.345665    5354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 16:49:43.356126    5354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-911000
	I0415 16:49:43.407022    5354 kubeconfig.go:125] found "ha-911000" server: "https://127.0.0.1:50524"
	I0415 16:49:43.407045    5354 api_server.go:166] Checking apiserver status ...
	I0415 16:49:43.407086    5354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 16:49:43.418343    5354 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2074/cgroup
	W0415 16:49:43.427929    5354 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2074/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 16:49:43.427982    5354 ssh_runner.go:195] Run: ls
	I0415 16:49:43.431859    5354 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50524/healthz ...
	I0415 16:49:43.435782    5354 api_server.go:279] https://127.0.0.1:50524/healthz returned 200:
	ok
	I0415 16:49:43.435799    5354 status.go:422] ha-911000-m03 apiserver status = Running (err=<nil>)
	I0415 16:49:43.435808    5354 status.go:257] ha-911000-m03 status: &{Name:ha-911000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 16:49:43.435818    5354 status.go:255] checking status of ha-911000-m04 ...
	I0415 16:49:43.436084    5354 cli_runner.go:164] Run: docker container inspect ha-911000-m04 --format={{.State.Status}}
	I0415 16:49:43.487036    5354 status.go:330] ha-911000-m04 host status = "Running" (err=<nil>)
	I0415 16:49:43.487062    5354 host.go:66] Checking if "ha-911000-m04" exists ...
	I0415 16:49:43.487343    5354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-911000-m04
	I0415 16:49:43.538884    5354 host.go:66] Checking if "ha-911000-m04" exists ...
	I0415 16:49:43.539153    5354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 16:49:43.539209    5354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-911000-m04
	I0415 16:49:43.590948    5354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50752 SSHKeyPath:/Users/jenkins/minikube-integration/18647-976/.minikube/machines/ha-911000-m04/id_rsa Username:docker}
	I0415 16:49:43.674684    5354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 16:49:43.685561    5354 status.go:257] ha-911000-m04 status: &{Name:ha-911000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.93s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.63s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 node start m02 -v=7 --alsologtostderr
E0415 16:50:04.675835    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:50:14.741129    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:14.747201    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:14.757304    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:14.777554    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:14.817789    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:14.898269    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:15.058589    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:15.378749    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:16.018961    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:17.299276    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 node start m02 -v=7 --alsologtostderr: (33.100048645s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: (1.457644787s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.63s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0415 16:50:19.860253    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.09254164s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (215.44s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-911000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-911000 -v=7 --alsologtostderr
E0415 16:50:24.980412    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:50:32.425837    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:50:35.220505    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-911000 -v=7 --alsologtostderr: (34.245629069s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-911000 --wait=true -v=7 --alsologtostderr
E0415 16:50:55.700342    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:51:36.660365    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:52:58.579154    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-911000 --wait=true -v=7 --alsologtostderr: (3m1.04575762s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-911000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (215.44s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.98s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 node delete m03 -v=7 --alsologtostderr: (10.850380983s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: (1.003478192s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.98s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (33.04s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 stop -v=7 --alsologtostderr: (32.824078066s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: exit status 7 (220.569268ms)
-- stdout --
	ha-911000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr **
	I0415 16:54:41.361787    6020 out.go:291] Setting OutFile to fd 1 ...
	I0415 16:54:41.362190    6020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:54:41.362198    6020 out.go:304] Setting ErrFile to fd 2...
	I0415 16:54:41.362201    6020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 16:54:41.362390    6020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18647-976/.minikube/bin
	I0415 16:54:41.362563    6020 out.go:298] Setting JSON to false
	I0415 16:54:41.362604    6020 mustload.go:65] Loading cluster: ha-911000
	I0415 16:54:41.362627    6020 notify.go:220] Checking for updates...
	I0415 16:54:41.362932    6020 config.go:182] Loaded profile config "ha-911000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 16:54:41.362947    6020 status.go:255] checking status of ha-911000 ...
	I0415 16:54:41.363340    6020 cli_runner.go:164] Run: docker container inspect ha-911000 --format={{.State.Status}}
	I0415 16:54:41.415217    6020 status.go:330] ha-911000 host status = "Stopped" (err=<nil>)
	I0415 16:54:41.415252    6020 status.go:343] host is not running, skipping remaining checks
	I0415 16:54:41.415261    6020 status.go:257] ha-911000 status: &{Name:ha-911000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 16:54:41.415278    6020 status.go:255] checking status of ha-911000-m02 ...
	I0415 16:54:41.415541    6020 cli_runner.go:164] Run: docker container inspect ha-911000-m02 --format={{.State.Status}}
	I0415 16:54:41.466607    6020 status.go:330] ha-911000-m02 host status = "Stopped" (err=<nil>)
	I0415 16:54:41.466641    6020 status.go:343] host is not running, skipping remaining checks
	I0415 16:54:41.466664    6020 status.go:257] ha-911000-m02 status: &{Name:ha-911000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 16:54:41.466684    6020 status.go:255] checking status of ha-911000-m04 ...
	I0415 16:54:41.466970    6020 cli_runner.go:164] Run: docker container inspect ha-911000-m04 --format={{.State.Status}}
	I0415 16:54:41.517222    6020 status.go:330] ha-911000-m04 host status = "Stopped" (err=<nil>)
	I0415 16:54:41.517248    6020 status.go:343] host is not running, skipping remaining checks
	I0415 16:54:41.517258    6020 status.go:257] ha-911000-m04 status: &{Name:ha-911000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.04s)

TestMultiControlPlane/serial/RestartCluster (86.6s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-911000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0415 16:55:04.671387    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
E0415 16:55:14.736446    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
E0415 16:55:42.417508    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-911000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m25.449501757s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: (1.020737425s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.60s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (36.81s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-911000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-911000 --control-plane -v=7 --alsologtostderr: (35.472331894s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-911000 status -v=7 --alsologtostderr: (1.337433219s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.109005234s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

TestImageBuild/serial/Setup (21.29s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-422000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-422000 --driver=docker : (21.294726146s)
--- PASS: TestImageBuild/serial/Setup (21.29s)

TestImageBuild/serial/NormalBuild (1.75s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-422000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-422000: (1.747939251s)
--- PASS: TestImageBuild/serial/NormalBuild (1.75s)

TestImageBuild/serial/BuildWithBuildArg (0.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-422000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-422000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-422000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

TestJSONOutput/start/Command (34.51s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-775000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-775000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (34.505589883s)
--- PASS: TestJSONOutput/start/Command (34.51s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-775000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-775000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-775000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-775000 --output=json --user=testUser: (10.834658587s)
--- PASS: TestJSONOutput/stop/Command (10.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-117000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-117000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (388.90319ms)

-- stdout --
	{"specversion":"1.0","id":"418395c1-362e-40dd-9f45-aaf5010e5115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-117000] minikube v1.33.0-beta.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef35b530-aa4f-40cd-8f66-551454a95ea5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18647"}}
	{"specversion":"1.0","id":"f446d3e8-4b20-45db-ae77-3ebbdf5355df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig"}}
	{"specversion":"1.0","id":"b350ca96-9950-44a6-82c2-451cf90f7ec1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"ff81279f-99a1-4bf6-aa49-56bdfbbd2864","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"40b2f0f1-8795-460d-8da4-efcbb2c6a960","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18647-976/.minikube"}}
	{"specversion":"1.0","id":"97ede1dc-4ef6-4fc0-b81f-5d5b3f52a075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eb84880b-36b3-42ad-9fc2-a356d349f80c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-117000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-117000
--- PASS: TestErrorJSONOutput (0.76s)
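The stdout block above shows the CloudEvents envelope that --output=json emits, one JSON object per line. As a hedged illustration, here is a minimal Go sketch that consumes such a stream and surfaces error events like the DRV_UNSUPPORTED_OS one above; the struct covers only the fields visible in this log, not minikube's full schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the CloudEvents fields visible in the log above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe e.g. `minikube start --output=json ...` into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines mixed into the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Run against the stdout captured above, this would print the single DRV_UNSUPPORTED_OS error with exit code 56.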

TestKicCustomNetwork/create_custom_network (24.57s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-840000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-840000 --network=: (22.152178173s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-840000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-840000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-840000: (2.365054394s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.57s)
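With --network= left empty, the docker driver creates a user-defined network named after the profile, which is what the docker network ls step above confirms. A standalone Go sketch of that check, assuming the profile-name convention holds (profile name copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List docker network names one per line, as the test's helper does.
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range strings.Fields(string(out)) {
		if name == "docker-network-840000" { // profile name from the log above
			fmt.Println("network created for profile:", name)
			return
		}
	}
	panic("no docker network named after the profile")
}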

TestKicCustomNetwork/use_default_bridge_network (22.87s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-492000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-492000 --network=bridge: (20.595702458s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-492000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-492000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-492000: (2.221697858s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.87s)

TestKicExistingNetwork (22.42s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-789000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-789000 --network=existing-network: (19.803307733s)
helpers_test.go:175: Cleaning up "existing-network-789000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-789000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-789000: (2.228571004s)
--- PASS: TestKicExistingNetwork (22.42s)

TestKicCustomSubnet (23.66s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-967000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-967000 --subnet=192.168.60.0/24: (21.229743549s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-967000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-967000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-967000: (2.380454218s)
--- PASS: TestKicCustomSubnet (23.66s)
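The docker network inspect step above is how the test checks that --subnet was honored. A standalone Go equivalent of that assertion, with the command and CIDR copied from the log:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24" // the --subnet value passed to minikube start
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-967000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if _, _, err := net.ParseCIDR(got); err != nil {
		panic(fmt.Sprintf("inspect returned %q, not a CIDR: %v", got, err))
	}
	if got != want {
		panic(fmt.Sprintf("subnet mismatch: got %s, want %s", got, want))
	}
	fmt.Println("custom subnet honored:", got)
}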

TestKicStaticIP (23.81s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-452000 --static-ip=192.168.200.200
E0415 17:00:04.727766    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-452000 --static-ip=192.168.200.200: (21.149217942s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-452000 ip
helpers_test.go:175: Cleaning up "static-ip-452000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-452000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-452000: (2.398791121s)
--- PASS: TestKicStaticIP (23.81s)
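The minikube ip call above is the verification half of --static-ip. A minimal Go sketch of the same check; binary path, profile, and address are copied from the log, and comparing trimmed stdout against the requested address is an assumption about how the assertion works:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.200.200" // the --static-ip value from the log above
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "static-ip-452000", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("static IP not honored: got %s, want %s", got, want))
	}
	fmt.Println("static IP honored:", want)
}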

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (48.79s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-343000 --driver=docker 
E0415 17:00:14.792190    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/functional-829000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-343000 --driver=docker : (20.900973193s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-345000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-345000 --driver=docker : (21.135838695s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-343000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-345000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-345000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-345000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-345000: (2.44908334s)
helpers_test.go:175: Cleaning up "first-343000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-343000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-343000: (2.418414771s)
--- PASS: TestMinikubeProfile (48.79s)
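profile list -ojson above returns machine-readable profile data, which is how the test distinguishes the first and second profiles. A small Go sketch for reading names out of that output; the valid/invalid grouping and the Name field are assumptions about minikube's JSON shape, so verify them against your version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles models just enough of `minikube profile list -ojson` to read names.
type profiles struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Println("valid profile:", v.Name)
	}
	for _, iv := range p.Invalid {
		fmt.Println("invalid profile:", iv.Name)
	}
}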

TestMountStart/serial/StartWithMountFirst (7.32s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-991000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-991000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.31609965s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.32s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-991000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (7.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-004000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-004000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.308920357s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.31s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (2.05s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-991000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-991000 --alsologtostderr -v=5: (2.050837614s)
--- PASS: TestMountStart/serial/DeleteFirst (2.05s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.54s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-004000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-004000: (1.540514391s)
--- PASS: TestMountStart/serial/Stop (1.54s)

TestMountStart/serial/RestartStopped (9.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-004000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-004000: (8.049271076s)
E0415 17:01:27.836735    1443 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18647-976/.minikube/profiles/addons-306000/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (9.05s)

TestPreload (98.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m4.451818519s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-972000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-972000 image pull gcr.io/k8s-minikube/busybox: (1.374199181s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-972000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-972000: (10.823117793s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-972000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-972000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (18.913762242s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-972000 image list
helpers_test.go:175: Cleaning up "test-preload-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-972000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-972000: (2.465465687s)
--- PASS: TestPreload (98.34s)
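The command sequence above is the substance of the preload test: start with preloading disabled on an older Kubernetes version, pull an extra image, stop, restart on the current default, then confirm the pulled image is still present. A standalone Go sketch of that flow (flags and names copied from the log; checking image list for a busybox substring is an assumption about the final assertion):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary used throughout this report and fails loudly.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %s: %v\n%s", strings.Join(args, " "), err, out))
	}
	return string(out)
}

func main() {
	const profile = "test-preload-972000" // profile name from the log above
	// 1. Start with preload disabled on an older Kubernetes version.
	run("start", "-p", profile, "--memory=2200", "--preload=false",
		"--driver=docker", "--kubernetes-version=v1.24.4")
	// 2. Pull an image so the runtime has state a later preload could clobber.
	run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	// 3. Stop, then restart on the current default version (preload now applies).
	run("stop", "-p", profile)
	run("start", "-p", profile, "--memory=2200", "--wait=true", "--driver=docker")
	// 4. The earlier pull must still be visible after the restart.
	if !strings.Contains(run("-p", profile, "image", "list"), "busybox") {
		panic("pulled image was lost across the preload restart")
	}
	run("delete", "-p", profile)
	fmt.Println("preload restart preserved the pulled image")
}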

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.16s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin
- MINIKUBE_LOCATION=18647
- KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1449880174/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1449880174/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1449880174/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1449880174/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.16s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.74s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin
- MINIKUBE_LOCATION=18647
- KUBECONFIG=/Users/jenkins/minikube-integration/18647-976/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1234420894/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1234420894/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1234420894/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1234420894/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.74s)

Test skip (19/216)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (14.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 20.057578ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bwf95" [d6a9284a-52cd-4bdb-8e74-9784d35e0f2e] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005904148s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ws6nm" [85238429-e824-457f-a316-d9c746693db6] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006973847s
addons_test.go:340: (dbg) Run:  kubectl --context addons-306000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-306000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-306000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.768838839s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.86s)

TestAddons/parallel/Ingress (10.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-306000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-306000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-306000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c402f236-7ff7-46dc-8dc1-00bbb8bf3b16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c402f236-7ff7-46dc-8dc1-00bbb8bf3b16] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006558705s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-306000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.80s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-829000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-829000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-98bfr" [ebd8cb55-a8c2-44a2-846e-0a42ec485c2e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-98bfr" [ebd8cb55-a8c2-44a2-846e-0a42ec485c2e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003767241s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)