Test Report: Docker_macOS 18756

                    
159c0885aec790b0bc18754712c4d2a4038767fb:2024-04-29:34251

Failed tests (22/201)

TestOffline (758.51s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-733000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-733000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m37.616172633s)

-- stdout --
	* [offline-docker-733000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-733000" primary control-plane node in "offline-docker-733000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-733000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
** stderr ** 
	I0429 05:34:43.590436   16500 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:34:43.590626   16500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:34:43.590631   16500 out.go:304] Setting ErrFile to fd 2...
	I0429 05:34:43.590635   16500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:34:43.590825   16500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:34:43.592515   16500 out.go:298] Setting JSON to false
	I0429 05:34:43.615626   16500 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7453,"bootTime":1714386630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 05:34:43.615742   16500 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:34:43.637877   16500 out.go:177] * [offline-docker-733000] minikube v1.33.0 on Darwin 14.4.1
	I0429 05:34:43.679631   16500 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 05:34:43.679652   16500 notify.go:220] Checking for updates...
	I0429 05:34:43.700425   16500 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 05:34:43.721564   16500 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 05:34:43.742568   16500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:34:43.784530   16500 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 05:34:43.805504   16500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:34:43.826661   16500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:34:43.880678   16500 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 05:34:43.880851   16500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:34:44.064723   16500 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-29 12:34:44.022670615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:34:44.107420   16500 out.go:177] * Using the docker driver based on user configuration
	I0429 05:34:44.128615   16500 start.go:297] selected driver: docker
	I0429 05:34:44.128647   16500 start.go:901] validating driver "docker" against <nil>
	I0429 05:34:44.128662   16500 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:34:44.132757   16500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:34:44.241109   16500 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-29 12:34:44.230878491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:34:44.241274   16500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:34:44.241460   16500 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:34:44.262665   16500 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 05:34:44.283780   16500 cni.go:84] Creating CNI manager for ""
	I0429 05:34:44.283824   16500 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:34:44.283840   16500 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:34:44.283970   16500 start.go:340] cluster config:
	{Name:offline-docker-733000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-733000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:34:44.305784   16500 out.go:177] * Starting "offline-docker-733000" primary control-plane node in "offline-docker-733000" cluster
	I0429 05:34:44.347667   16500 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 05:34:44.389578   16500 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 05:34:44.431598   16500 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:34:44.431636   16500 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 05:34:44.431645   16500 cache.go:56] Caching tarball of preloaded images
	I0429 05:34:44.431649   16500 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 05:34:44.431764   16500 preload.go:173] Found /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 05:34:44.431774   16500 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:34:44.432660   16500 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/offline-docker-733000/config.json ...
	I0429 05:34:44.432720   16500 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/offline-docker-733000/config.json: {Name:mkcd3f667e17d434e30036f83975b52233dd7a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:34:44.557038   16500 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 05:34:44.557082   16500 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 05:34:44.557110   16500 cache.go:194] Successfully downloaded all kic artifacts
	I0429 05:34:44.557266   16500 start.go:360] acquireMachinesLock for offline-docker-733000: {Name:mk612a9e507635e36280d2546b44301efe0fa47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:34:44.557468   16500 start.go:364] duration metric: took 186.668µs to acquireMachinesLock for "offline-docker-733000"
	I0429 05:34:44.557503   16500 start.go:93] Provisioning new machine with config: &{Name:offline-docker-733000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-733000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:34:44.557938   16500 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:34:44.600381   16500 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:34:44.600594   16500 start.go:159] libmachine.API.Create for "offline-docker-733000" (driver="docker")
	I0429 05:34:44.600621   16500 client.go:168] LocalClient.Create starting
	I0429 05:34:44.600743   16500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:34:44.600796   16500 main.go:141] libmachine: Decoding PEM data...
	I0429 05:34:44.600813   16500 main.go:141] libmachine: Parsing certificate...
	I0429 05:34:44.600887   16500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:34:44.600924   16500 main.go:141] libmachine: Decoding PEM data...
	I0429 05:34:44.600931   16500 main.go:141] libmachine: Parsing certificate...
	I0429 05:34:44.601414   16500 cli_runner.go:164] Run: docker network inspect offline-docker-733000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:34:44.708688   16500 cli_runner.go:211] docker network inspect offline-docker-733000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:34:44.708790   16500 network_create.go:281] running [docker network inspect offline-docker-733000] to gather additional debugging logs...
	I0429 05:34:44.708808   16500 cli_runner.go:164] Run: docker network inspect offline-docker-733000
	W0429 05:34:44.757333   16500 cli_runner.go:211] docker network inspect offline-docker-733000 returned with exit code 1
	I0429 05:34:44.757363   16500 network_create.go:284] error running [docker network inspect offline-docker-733000]: docker network inspect offline-docker-733000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-733000 not found
	I0429 05:34:44.757384   16500 network_create.go:286] output of [docker network inspect offline-docker-733000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-733000 not found
	
	** /stderr **
	I0429 05:34:44.757529   16500 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:34:44.808058   16500 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:34:44.809671   16500 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:34:44.810031   16500 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002262ab0}
	I0429 05:34:44.810049   16500 network_create.go:124] attempt to create docker network offline-docker-733000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 05:34:44.810126   16500 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-733000 offline-docker-733000
	W0429 05:34:44.859694   16500 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-733000 offline-docker-733000 returned with exit code 1
	W0429 05:34:44.859734   16500 network_create.go:149] failed to create docker network offline-docker-733000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-733000 offline-docker-733000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 05:34:44.859753   16500 network_create.go:116] failed to create docker network offline-docker-733000 192.168.67.0/24, will retry: subnet is taken
	I0429 05:34:44.861175   16500 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:34:44.861539   16500 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002263b70}
	I0429 05:34:44.861550   16500 network_create.go:124] attempt to create docker network offline-docker-733000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 05:34:44.861612   16500 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-733000 offline-docker-733000
	I0429 05:34:44.948369   16500 network_create.go:108] docker network offline-docker-733000 192.168.76.0/24 created
	I0429 05:34:44.948420   16500 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-733000" container
	I0429 05:34:44.948534   16500 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:34:44.999437   16500 cli_runner.go:164] Run: docker volume create offline-docker-733000 --label name.minikube.sigs.k8s.io=offline-docker-733000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:34:45.050970   16500 oci.go:103] Successfully created a docker volume offline-docker-733000
	I0429 05:34:45.051116   16500 cli_runner.go:164] Run: docker run --rm --name offline-docker-733000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-733000 --entrypoint /usr/bin/test -v offline-docker-733000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:34:45.732732   16500 oci.go:107] Successfully prepared a docker volume offline-docker-733000
	I0429 05:34:45.732778   16500 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:34:45.732794   16500 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:34:45.732886   16500 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-733000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:40:44.611723   16500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:40:44.611872   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:44.664254   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:40:44.664391   16500 retry.go:31] will retry after 317.301005ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:44.982986   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:45.034820   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:40:45.034936   16500 retry.go:31] will retry after 217.535477ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:45.254841   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:45.305732   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:40:45.305839   16500 retry.go:31] will retry after 740.212237ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:46.048429   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:46.102119   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	W0429 05:40:46.102229   16500 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	
	W0429 05:40:46.102250   16500 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:46.102310   16500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:40:46.102362   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:46.172444   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:40:46.172533   16500 retry.go:31] will retry after 346.852681ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:46.520060   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:46.571546   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:40:46.571645   16500 retry.go:31] will retry after 523.621989ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:47.096511   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:47.150083   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:40:47.150180   16500 retry.go:31] will retry after 283.853787ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:47.434477   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:40:47.484078   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	W0429 05:40:47.484181   16500 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	
	W0429 05:40:47.484201   16500 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:47.484217   16500 start.go:128] duration metric: took 6m2.915687753s to createHost
	I0429 05:40:47.484224   16500 start.go:83] releasing machines lock for "offline-docker-733000", held for 6m2.916168375s
	W0429 05:40:47.484240   16500 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 05:40:47.484670   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:47.531465   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:47.531525   16500 delete.go:82] Unable to get host status for offline-docker-733000, assuming it has already been deleted: state: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	W0429 05:40:47.531588   16500 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 05:40:47.531598   16500 start.go:728] Will try again in 5 seconds ...
	I0429 05:40:52.532793   16500 start.go:360] acquireMachinesLock for offline-docker-733000: {Name:mk612a9e507635e36280d2546b44301efe0fa47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:40:52.533951   16500 start.go:364] duration metric: took 197.578µs to acquireMachinesLock for "offline-docker-733000"
	I0429 05:40:52.534007   16500 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:40:52.534024   16500 fix.go:54] fixHost starting: 
	I0429 05:40:52.534569   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:52.584210   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:52.584258   16500 fix.go:112] recreateIfNeeded on offline-docker-733000: state= err=unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:52.584277   16500 fix.go:117] machineExists: false. err=machine does not exist
	I0429 05:40:52.627616   16500 out.go:177] * docker "offline-docker-733000" container is missing, will recreate.
	I0429 05:40:52.648672   16500 delete.go:124] DEMOLISHING offline-docker-733000 ...
	I0429 05:40:52.648889   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:52.698793   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	W0429 05:40:52.698840   16500 stop.go:83] unable to get state: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:52.698860   16500 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:52.699230   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:52.746714   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:52.746778   16500 delete.go:82] Unable to get host status for offline-docker-733000, assuming it has already been deleted: state: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:52.746859   16500 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-733000
	W0429 05:40:52.794709   16500 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-733000 returned with exit code 1
	I0429 05:40:52.794747   16500 kic.go:371] could not find the container offline-docker-733000 to remove it. will try anyways
	I0429 05:40:52.794818   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:52.842051   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	W0429 05:40:52.842116   16500 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:52.842187   16500 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-733000 /bin/bash -c "sudo init 0"
	W0429 05:40:52.890458   16500 cli_runner.go:211] docker exec --privileged -t offline-docker-733000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 05:40:52.890497   16500 oci.go:650] error shutdown offline-docker-733000: docker exec --privileged -t offline-docker-733000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:53.892924   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:53.944944   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:53.945010   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:53.945019   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:40:53.945042   16500 retry.go:31] will retry after 646.569938ms: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:54.592954   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:54.646710   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:54.646762   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:54.646772   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:40:54.646803   16500 retry.go:31] will retry after 575.836551ms: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:55.222959   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:55.273489   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:55.273550   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:55.273560   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:40:55.273582   16500 retry.go:31] will retry after 1.309577945s: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:56.583985   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:56.636519   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:56.636566   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:56.636576   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:40:56.636601   16500 retry.go:31] will retry after 1.815726013s: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:58.453146   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:40:58.504328   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:40:58.504373   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:40:58.504384   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:40:58.504417   16500 retry.go:31] will retry after 1.903844926s: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:41:00.410654   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:41:00.461781   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:00.461831   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:41:00.461843   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:41:00.461870   16500 retry.go:31] will retry after 4.513705672s: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:41:04.976988   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:41:05.031765   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:05.031813   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:41:05.031825   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:41:05.031851   16500 retry.go:31] will retry after 8.531051129s: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:41:13.563810   16500 cli_runner.go:164] Run: docker container inspect offline-docker-733000 --format={{.State.Status}}
	W0429 05:41:13.630903   16500 cli_runner.go:211] docker container inspect offline-docker-733000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:13.630951   16500 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:41:13.630961   16500 oci.go:664] temporary error: container offline-docker-733000 status is  but expect it to be exited
	I0429 05:41:13.630991   16500 oci.go:88] couldn't shut down offline-docker-733000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	 
	I0429 05:41:13.631071   16500 cli_runner.go:164] Run: docker rm -f -v offline-docker-733000
	I0429 05:41:13.679438   16500 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-733000
	W0429 05:41:13.727004   16500 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-733000 returned with exit code 1
	I0429 05:41:13.727126   16500 cli_runner.go:164] Run: docker network inspect offline-docker-733000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:41:13.775521   16500 cli_runner.go:164] Run: docker network rm offline-docker-733000
	I0429 05:41:13.882049   16500 fix.go:124] Sleeping 1 second for extra luck!
	I0429 05:41:14.882598   16500 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:41:14.905856   16500 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:41:14.906025   16500 start.go:159] libmachine.API.Create for "offline-docker-733000" (driver="docker")
	I0429 05:41:14.906052   16500 client.go:168] LocalClient.Create starting
	I0429 05:41:14.906254   16500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:41:14.906351   16500 main.go:141] libmachine: Decoding PEM data...
	I0429 05:41:14.906382   16500 main.go:141] libmachine: Parsing certificate...
	I0429 05:41:14.906459   16500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:41:14.906533   16500 main.go:141] libmachine: Decoding PEM data...
	I0429 05:41:14.906548   16500 main.go:141] libmachine: Parsing certificate...
	I0429 05:41:14.907284   16500 cli_runner.go:164] Run: docker network inspect offline-docker-733000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:41:14.957513   16500 cli_runner.go:211] docker network inspect offline-docker-733000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:41:14.957612   16500 network_create.go:281] running [docker network inspect offline-docker-733000] to gather additional debugging logs...
	I0429 05:41:14.957631   16500 cli_runner.go:164] Run: docker network inspect offline-docker-733000
	W0429 05:41:15.005295   16500 cli_runner.go:211] docker network inspect offline-docker-733000 returned with exit code 1
	I0429 05:41:15.005327   16500 network_create.go:284] error running [docker network inspect offline-docker-733000]: docker network inspect offline-docker-733000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-733000 not found
	I0429 05:41:15.005338   16500 network_create.go:286] output of [docker network inspect offline-docker-733000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-733000 not found
	
	** /stderr **
	I0429 05:41:15.005474   16500 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:41:15.054946   16500 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:15.056437   16500 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:15.057861   16500 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:15.059214   16500 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:15.060794   16500 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:15.061199   16500 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000012f40}
	I0429 05:41:15.061213   16500 network_create.go:124] attempt to create docker network offline-docker-733000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0429 05:41:15.061283   16500 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-733000 offline-docker-733000
	I0429 05:41:15.146318   16500 network_create.go:108] docker network offline-docker-733000 192.168.94.0/24 created
	I0429 05:41:15.146353   16500 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-733000" container
	I0429 05:41:15.146458   16500 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:41:15.196484   16500 cli_runner.go:164] Run: docker volume create offline-docker-733000 --label name.minikube.sigs.k8s.io=offline-docker-733000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:41:15.244520   16500 oci.go:103] Successfully created a docker volume offline-docker-733000
	I0429 05:41:15.244643   16500 cli_runner.go:164] Run: docker run --rm --name offline-docker-733000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-733000 --entrypoint /usr/bin/test -v offline-docker-733000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:41:15.490204   16500 oci.go:107] Successfully prepared a docker volume offline-docker-733000
	I0429 05:41:15.490237   16500 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:41:15.490256   16500 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:41:15.490356   16500 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-733000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:47:14.918129   16500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:47:14.918257   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:14.968903   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:14.969016   16500 retry.go:31] will retry after 297.14977ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:15.268608   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:15.320684   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:15.320802   16500 retry.go:31] will retry after 388.928609ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:15.712143   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:15.764370   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:15.764475   16500 retry.go:31] will retry after 305.28246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:16.070100   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:16.122913   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:16.123028   16500 retry.go:31] will retry after 497.066788ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:16.620939   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:16.672793   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	W0429 05:47:16.672906   16500 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	
	W0429 05:47:16.672924   16500 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:16.672984   16500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:47:16.673045   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:16.721508   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:16.721608   16500 retry.go:31] will retry after 263.842415ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:16.985716   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:17.036587   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:17.036687   16500 retry.go:31] will retry after 254.519765ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:17.292939   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:17.347494   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:17.347594   16500 retry.go:31] will retry after 738.166508ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:18.088156   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:18.141466   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	W0429 05:47:18.141583   16500 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	
	W0429 05:47:18.141598   16500 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:18.141611   16500 start.go:128] duration metric: took 6m3.248094109s to createHost
	I0429 05:47:18.141677   16500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:47:18.141730   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:18.189587   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:18.189686   16500 retry.go:31] will retry after 371.588735ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:18.563674   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:18.639445   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:18.639562   16500 retry.go:31] will retry after 556.147581ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:19.198101   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:19.250236   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:19.250330   16500 retry.go:31] will retry after 372.458477ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:19.624079   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:19.674187   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	W0429 05:47:19.674289   16500 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	
	W0429 05:47:19.674306   16500 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:19.674375   16500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:47:19.674428   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:19.722946   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:19.723038   16500 retry.go:31] will retry after 345.519395ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:20.070934   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:20.120689   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:20.120778   16500 retry.go:31] will retry after 447.711192ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:20.570357   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:20.621078   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	I0429 05:47:20.621175   16500 retry.go:31] will retry after 343.45633ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:20.965244   16500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000
	W0429 05:47:21.018416   16500 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000 returned with exit code 1
	W0429 05:47:21.018527   16500 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	
	W0429 05:47:21.018544   16500 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-733000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-733000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000
	I0429 05:47:21.018549   16500 fix.go:56] duration metric: took 6m28.472875697s for fixHost
	I0429 05:47:21.018555   16500 start.go:83] releasing machines lock for "offline-docker-733000", held for 6m28.472922775s
	W0429 05:47:21.018628   16500 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-733000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-733000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 05:47:21.061088   16500 out.go:177] 
	W0429 05:47:21.081850   16500 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 05:47:21.081875   16500 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 05:47:21.081912   16500 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 05:47:21.102858   16500 out.go:177] 

** /stderr **
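
The stderr above also documents minikube's subnet picker: it walks the private 192.168.x.0/24 candidates in steps of 9, skips the five already-reserved ranges (49, 58, 67, 76, 85), and takes the first free one, 192.168.94.0/24, with the MTU of 65535 apparently read from the host bridge network inspected just before. The network-create step itself succeeded; only the container never came up. The exact command, copied verbatim from the log, can be replayed by hand for debugging (only if the network no longer exists, since names must be unique):

	docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-733000 \
	  offline-docker-733000
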
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-733000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-04-29 05:47:21.179593 -0700 PDT m=+6349.873672936
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-733000
helpers_test.go:235: (dbg) docker inspect offline-docker-733000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-733000",
	        "Id": "3c27f2e38330068453a6ed794ded29a9c9ce0162c25286850b3502ab46224f6d",
	        "Created": "2024-04-29T12:41:15.10630251Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-733000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-733000 -n offline-docker-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-733000 -n offline-docker-733000: exit status 7 (111.993674ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:47:21.341715   17109 status.go:249] status error: host: state: unknown state "offline-docker-733000": docker container inspect offline-docker-733000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-733000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-733000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-733000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-733000
--- FAIL: TestOffline (758.51s)
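
The whole failure reduces to one step: the preload extraction into the offline-docker-733000 volume (the docker run ... tar -I lz4 -xf /preloaded.tar launched at 05:41:15) appears to have still been running when the 360-second create-host deadline expired around 05:47:14, so the kic container was never created and every docker container inspect retry failed with "No such container" until minikube gave up with DRV_CREATE_TIMEOUT. The docker inspect output above confirms it: the network survived with an empty Containers map. A minimal manual post-mortem along the same lines (hypothetical; the harness deletes the profile right after, so these only apply to a live or un-cleaned run — all names are taken from the log):

	docker network inspect offline-docker-733000                 # exists; Containers should be empty
	docker volume inspect offline-docker-733000                  # preload volume created by the sidecar step
	docker ps -a --filter name=offline-docker-733000             # expected empty: the kic container was never created
	out/minikube-darwin-amd64 delete -p offline-docker-733000    # cleanup, as the log itself suggests
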

TestCertOptions (7201.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-499000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (1m34s)
	TestCertOptions (1m1s)
	TestNetworkPlugins (26m48s)
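
The panic is the test binary's own watchdog, not a crash in any single test: goroutine 2446 below is testing.(*M).startAlarm firing after the suite-wide 2h0m0s -timeout, and the three tests listed above are simply the ones still running when it went off (TestCertOptions itself had only been going for 1m1s). If this runner legitimately needs more headroom, the limit is the standard go test flag; a sketch, assuming the suite is launched directly with go test from the minikube repo root (the Jenkins wrapper may set it elsewhere, and real runs pass additional suite flags):

	go test ./test/integration -timeout 3h -run 'TestCertOptions|TestCertExpiration|TestNetworkPlugins'
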

goroutine 2446 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000a20340, 0xc000ad9bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007d84f8, {0xc5c5fc0, 0x2a, 0x2a}, {0x8117aa5?, 0x9c4de19?, 0xc5e8d80?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000a9c780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000a9c780)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00057eb80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 1089 [select, 107 minutes]:
net/http.(*persistConn).writeLoop(0xc0008b7440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1074
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 801 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0029f0ed0, 0x2b)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xad2e3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002316120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0029f0f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021fb9a0, {0xb240760, 0xc0021e6390}, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021fb9a0, 0x3b9aca00, 0x0, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 793
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2432 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x53f144a0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002760ae0?, 0xc0021dd298?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002760ae0, {0xc0021dd298, 0x568, 0x568})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a3a100, {0xc0021dd298?, 0xc000112548?, 0x22e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002870810, {0xb23f178, 0xc0020e6100})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xb23f2b8, 0xc002870810}, {0xb23f178, 0xc0020e6100}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0xb23f2b8, 0xc002870810})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000684fc0?, {0xb23f2b8?, 0xc002870810?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xb23f2b8, 0xc002870810}, {0xb23f238, 0xc002a3a100}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026e8a80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 522
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 71 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 70
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 2128 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067f860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067f860, 0xc0001aca00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1066 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002169ce0, 0xc0028ef5c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 712
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2434 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0xc001361340, 0xc0027664e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 522
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2140 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0024b8d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0024b8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0024b8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc0024b8d00, 0xb234538)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 521 [syscall]:
syscall.syscall6(0xc002871f80?, 0x1000000000010?, 0x10000000019?, 0x540014c8?, 0x90?, 0xcf02108?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0020b18a0?, 0x80580a5?, 0x90?, 0xb1a1140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x8188c45?, 0xc0020b18d4, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0008832f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0013611e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0013611e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a20ea0, 0xc0013611e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc000a20ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc000a20ea0, 0xb234478)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2445 [select]:
os/exec.(*Cmd).watchCtx(0xc0013611e0, 0xc0029647e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 521
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2044 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000848ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000848ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc000848ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc000848ea0, 0xb234560)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 162 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021aec60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 163 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00096aa40, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 168 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00096aa10, 0x2c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xad2e3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021aeb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00096aa40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006b7d40, {0xb240760, 0xc0021e7380}, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b7d40, 0x3b9aca00, 0x0, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 169 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xb264240, 0xc0000663c0}, 0xc000113f50, 0xc002261f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xb264240, 0xc0000663c0}, 0xc0?, 0xc000113f50, 0xc000113f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xb264240?, 0xc0000663c0?}, 0xc000113fd0?, 0x844987b?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x84498c0?, 0xc000456200?, 0xc000113fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 170 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2043 [chan receive, 27 minutes]:
testing.(*T).Run(0xc0008484e0, {0x9bf48e7?, 0x5d665d44545?}, 0xc0023141f8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0008484e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0008484e0, 0xb234558)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2045 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000849520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000849520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc000849520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc000849520, 0xb234570)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 611 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x53f14e50, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0001ad380?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001ad380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0001ad380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc002a80880)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc002a80880)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009ec0f0, {0xb2570f0, 0xc002a80880})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009ec0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0008496c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 592
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2443 [IO wait]:
internal/poll.runtime_pollWait(0x53f15040, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0027b6ae0?, 0xc0023c328f?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0027b6ae0, {0xc0023c328f, 0x571, 0x571})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a3a0d8, {0xc0023c328f?, 0xc002402380?, 0x225?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028708a0, {0xb23f178, 0xc0020e6130})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xb23f2b8, 0xc0028708a0}, {0xb23f178, 0xc0020e6130}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000094678?, {0xb23f2b8, 0xc0028708a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000094738?, {0xb23f2b8?, 0xc0028708a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xb23f2b8, 0xc0028708a0}, {0xb23f238, 0xc002a3a0d8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002964720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 521
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 522 [syscall, 1 minutes]:
syscall.syscall6(0xc002871f80?, 0x1000000000010?, 0x10000000019?, 0x539cdf80?, 0x90?, 0xcf02108?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0022c1a40?, 0x80580a5?, 0x90?, 0xb1a1140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x8188c45?, 0xc0022c1a74, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002a3c4b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001361340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001361340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a21a00, 0xc001361340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc000a21a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc000a21a00, 0xb234470)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2126 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067f520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067f520, 0xc0001ac900)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1056 [select, 107 minutes]:
net/http.(*persistConn).readLoop(0xc0008b7440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1074
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 963 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a126e0, 0xc0028ee780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 962
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 792 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002316240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 791
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2433 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x53f14c60, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002760ba0?, 0xc000af6200?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002760ba0, {0xc000af6200, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a3a118, {0xc000af6200?, 0x53d93968?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002870840, {0xb23f178, 0xc0020e6110})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xb23f2b8, 0xc002870840}, {0xb23f178, 0xc0020e6110}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000113678?, {0xb23f2b8, 0xc002870840})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000113738?, {0xb23f2b8?, 0xc002870840?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xb23f2b8, 0xc002870840}, {0xb23f238, 0xc002a3a118}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0029641e0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 522
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2145 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067fa00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067fa00, 0xc0001aca80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1015 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002168840, 0xc0028ee900)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1014
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 789 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002168420, 0xc0029fdda0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 788
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2138 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0024b89c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0024b89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0024b89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0024b89c0, 0xb2345a8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2132 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0024b8000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0024b8000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0024b8000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0024b8000, 0xb2345a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 803 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 802
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 793 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0029f0f00, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 791
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2444 [IO wait]:
internal/poll.runtime_pollWait(0x53f14880, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0027b6ba0?, 0xc000af7200?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0027b6ba0, {0xc000af7200, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002a3a0f8, {0xc000af7200?, 0x9?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028708d0, {0xb23f178, 0xc0020e6140})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xb23f2b8, 0xc0028708d0}, {0xb23f178, 0xc0020e6140}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0xb23f2b8, 0xc0028708d0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x804fa1e?, {0xb23f2b8?, 0xc0028708d0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xb23f2b8, 0xc0028708d0}, {0xb23f238, 0xc002a3a0f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002594d80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 521
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 802 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xb264240, 0xc0000663c0}, 0xc002105f50, 0xc000a5df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xb264240, 0xc0000663c0}, 0x80?, 0xc002105f50, 0xc002105f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xb264240?, 0xc0000663c0?}, 0x3832396132356233?, 0x3338313632613035?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x81d1ba5?, 0xc0021bf1e0?, 0xc000066d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 793
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2127 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067f6c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067f6c0, 0xc0001ac980)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2123 [chan receive, 27 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00067e000, 0xc0023141f8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2043
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2139 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0024b8b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0024b8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0024b8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0024b8b60, 0xb234520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2124 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067e340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067e340, 0xc0001ac700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2125 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067eb60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067eb60, 0xc0001ac880)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2137 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0024b8820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0024b8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0024b8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0024b8820, 0xb234580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2147 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067fd40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067fd40, 0xc0001ad500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2148 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000849860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000849860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000849860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000849860, 0xc0001ad600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1759 [syscall, 93 minutes]:
syscall.syscall(0x0?, 0xc000677ed8?, 0x80fff05?, 0xc0001166b0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc0001166f0?, 0xc0023ad180?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1751
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1

goroutine 2146 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000654a50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00067fba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067fba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00067fba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00067fba0, 0xc0001ad400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2123
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

TestDockerFlags (759.92s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags


=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-064000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0429 05:48:20.504760    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:52:18.637227    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:52:35.582628    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:53:20.485475    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:57:35.588157    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:58:03.539871    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:58:20.489066    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-064000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m38.627374003s)

-- stdout --
	* [docker-flags-064000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-064000" primary control-plane node in "docker-flags-064000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-064000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0429 05:47:50.595269   17264 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:47:50.595466   17264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:47:50.595471   17264 out.go:304] Setting ErrFile to fd 2...
	I0429 05:47:50.595475   17264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:47:50.595660   17264 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:47:50.597192   17264 out.go:298] Setting JSON to false
	I0429 05:47:50.619352   17264 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8240,"bootTime":1714386630,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 05:47:50.619451   17264 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:47:50.641633   17264 out.go:177] * [docker-flags-064000] minikube v1.33.0 on Darwin 14.4.1
	I0429 05:47:50.685606   17264 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 05:47:50.685614   17264 notify.go:220] Checking for updates...
	I0429 05:47:50.708294   17264 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 05:47:50.728457   17264 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 05:47:50.749433   17264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:47:50.770529   17264 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 05:47:50.812257   17264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:47:50.834387   17264 config.go:182] Loaded profile config "force-systemd-flag-239000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:47:50.834556   17264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:47:50.889063   17264 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 05:47:50.889250   17264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:47:50.998103   17264 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-29 12:47:50.987060152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:47:51.019976   17264 out.go:177] * Using the docker driver based on user configuration
	I0429 05:47:51.061820   17264 start.go:297] selected driver: docker
	I0429 05:47:51.061855   17264 start.go:901] validating driver "docker" against <nil>
	I0429 05:47:51.061869   17264 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:47:51.066182   17264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:47:51.171268   17264 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-29 12:47:51.160727167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:47:51.171460   17264 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:47:51.171636   17264 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0429 05:47:51.193683   17264 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 05:47:51.215380   17264 cni.go:84] Creating CNI manager for ""
	I0429 05:47:51.215424   17264 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:47:51.215439   17264 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:47:51.215524   17264 start.go:340] cluster config:
	{Name:docker-flags-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:47:51.237528   17264 out.go:177] * Starting "docker-flags-064000" primary control-plane node in "docker-flags-064000" cluster
	I0429 05:47:51.279488   17264 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 05:47:51.300487   17264 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 05:47:51.342252   17264 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:47:51.342288   17264 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 05:47:51.342309   17264 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 05:47:51.342329   17264 cache.go:56] Caching tarball of preloaded images
	I0429 05:47:51.342523   17264 preload.go:173] Found /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 05:47:51.342535   17264 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:47:51.342671   17264 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/docker-flags-064000/config.json ...
	I0429 05:47:51.343076   17264 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/docker-flags-064000/config.json: {Name:mk38f6595f4f57be5d1e9b1f56dd2eebbf21c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:47:51.392021   17264 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 05:47:51.392049   17264 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 05:47:51.392070   17264 cache.go:194] Successfully downloaded all kic artifacts
	I0429 05:47:51.392121   17264 start.go:360] acquireMachinesLock for docker-flags-064000: {Name:mk4be84983964b94f4feff8191e592b1ce51fc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:47:51.392279   17264 start.go:364] duration metric: took 146.132µs to acquireMachinesLock for "docker-flags-064000"
	I0429 05:47:51.392318   17264 start.go:93] Provisioning new machine with config: &{Name:docker-flags-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:47:51.392389   17264 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:47:51.434504   17264 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:47:51.434865   17264 start.go:159] libmachine.API.Create for "docker-flags-064000" (driver="docker")
	I0429 05:47:51.434914   17264 client.go:168] LocalClient.Create starting
	I0429 05:47:51.435140   17264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:47:51.435241   17264 main.go:141] libmachine: Decoding PEM data...
	I0429 05:47:51.435275   17264 main.go:141] libmachine: Parsing certificate...
	I0429 05:47:51.435372   17264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:47:51.435447   17264 main.go:141] libmachine: Decoding PEM data...
	I0429 05:47:51.435461   17264 main.go:141] libmachine: Parsing certificate...
	I0429 05:47:51.436325   17264 cli_runner.go:164] Run: docker network inspect docker-flags-064000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:47:51.484708   17264 cli_runner.go:211] docker network inspect docker-flags-064000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:47:51.484802   17264 network_create.go:281] running [docker network inspect docker-flags-064000] to gather additional debugging logs...
	I0429 05:47:51.484818   17264 cli_runner.go:164] Run: docker network inspect docker-flags-064000
	W0429 05:47:51.532766   17264 cli_runner.go:211] docker network inspect docker-flags-064000 returned with exit code 1
	I0429 05:47:51.532806   17264 network_create.go:284] error running [docker network inspect docker-flags-064000]: docker network inspect docker-flags-064000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-064000 not found
	I0429 05:47:51.532818   17264 network_create.go:286] output of [docker network inspect docker-flags-064000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-064000 not found
	
	** /stderr **
	I0429 05:47:51.532948   17264 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:47:51.583027   17264 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:47:51.584659   17264 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:47:51.586259   17264 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:47:51.587640   17264 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:47:51.588006   17264 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000b7c3f0}
	I0429 05:47:51.588022   17264 network_create.go:124] attempt to create docker network docker-flags-064000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 05:47:51.588090   17264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-064000 docker-flags-064000
	I0429 05:47:51.672106   17264 network_create.go:108] docker network docker-flags-064000 192.168.85.0/24 created
	I0429 05:47:51.672145   17264 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-064000" container
	I0429 05:47:51.672258   17264 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:47:51.722441   17264 cli_runner.go:164] Run: docker volume create docker-flags-064000 --label name.minikube.sigs.k8s.io=docker-flags-064000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:47:51.772895   17264 oci.go:103] Successfully created a docker volume docker-flags-064000
	I0429 05:47:51.773012   17264 cli_runner.go:164] Run: docker run --rm --name docker-flags-064000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-064000 --entrypoint /usr/bin/test -v docker-flags-064000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:47:52.085393   17264 oci.go:107] Successfully prepared a docker volume docker-flags-064000
	I0429 05:47:52.085435   17264 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:47:52.085453   17264 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:47:52.085564   17264 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-064000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:53:51.415699   17264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:53:51.415861   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:51.467123   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 05:53:51.467251   17264 retry.go:31] will retry after 168.466972ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:51.638140   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:51.688735   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 05:53:51.688828   17264 retry.go:31] will retry after 313.09615ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:52.003532   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:52.055447   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 05:53:52.055544   17264 retry.go:31] will retry after 316.226307ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:52.372123   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:52.422110   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 05:53:52.422212   17264 retry.go:31] will retry after 753.336638ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:53.176313   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:53.228664   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	W0429 05:53:53.228764   17264 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	
	W0429 05:53:53.228791   17264 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:53.228847   17264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:53:53.228898   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:53.276919   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 05:53:53.277011   17264 retry.go:31] will retry after 320.806121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:53.600231   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:53.651037   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 05:53:53.651129   17264 retry.go:31] will retry after 471.354955ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:54.124481   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:54.174847   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 05:53:54.174953   17264 retry.go:31] will retry after 687.431517ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:54.863189   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 05:53:54.915776   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	W0429 05:53:54.915873   17264 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	
	W0429 05:53:54.915893   17264 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:53:54.915911   17264 start.go:128] duration metric: took 6m3.543225127s to createHost
	I0429 05:53:54.915918   17264 start.go:83] releasing machines lock for "docker-flags-064000", held for 6m3.543344018s
	W0429 05:53:54.915934   17264 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 05:53:54.916383   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:53:54.965860   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:54.965919   17264 delete.go:82] Unable to get host status for docker-flags-064000, assuming it has already been deleted: state: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	W0429 05:53:54.966033   17264 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 05:53:54.966042   17264 start.go:728] Will try again in 5 seconds ...
	I0429 05:53:59.967292   17264 start.go:360] acquireMachinesLock for docker-flags-064000: {Name:mk4be84983964b94f4feff8191e592b1ce51fc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:53:59.967533   17264 start.go:364] duration metric: took 188.983µs to acquireMachinesLock for "docker-flags-064000"
	I0429 05:53:59.967573   17264 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:53:59.967591   17264 fix.go:54] fixHost starting: 
	I0429 05:53:59.968109   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:00.022301   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:00.022343   17264 fix.go:112] recreateIfNeeded on docker-flags-064000: state= err=unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:00.022375   17264 fix.go:117] machineExists: false. err=machine does not exist
	I0429 05:54:00.044081   17264 out.go:177] * docker "docker-flags-064000" container is missing, will recreate.
	I0429 05:54:00.085918   17264 delete.go:124] DEMOLISHING docker-flags-064000 ...
	I0429 05:54:00.086110   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:00.135881   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	W0429 05:54:00.135939   17264 stop.go:83] unable to get state: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:00.135959   17264 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:00.136353   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:00.184312   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:00.184374   17264 delete.go:82] Unable to get host status for docker-flags-064000, assuming it has already been deleted: state: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:00.184453   17264 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-064000
	W0429 05:54:00.232161   17264 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-064000 returned with exit code 1
	I0429 05:54:00.232198   17264 kic.go:371] could not find the container docker-flags-064000 to remove it. will try anyways
	I0429 05:54:00.232268   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:00.280433   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	W0429 05:54:00.280481   17264 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:00.280557   17264 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-064000 /bin/bash -c "sudo init 0"
	W0429 05:54:00.328444   17264 cli_runner.go:211] docker exec --privileged -t docker-flags-064000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 05:54:00.328477   17264 oci.go:650] error shutdown docker-flags-064000: docker exec --privileged -t docker-flags-064000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:01.330023   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:01.379496   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:01.379560   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:01.379575   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:01.379604   17264 retry.go:31] will retry after 282.155501ms: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:01.662920   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:01.715527   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:01.715574   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:01.715583   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:01.715605   17264 retry.go:31] will retry after 649.450688ms: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:02.366193   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:02.417633   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:02.417680   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:02.417693   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:02.417716   17264 retry.go:31] will retry after 1.651873502s: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:04.071939   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:04.123498   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:04.123546   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:04.123562   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:04.123587   17264 retry.go:31] will retry after 1.973922802s: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:06.098579   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:06.149273   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:06.149324   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:06.149334   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:06.149358   17264 retry.go:31] will retry after 1.519205287s: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:07.668880   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:07.720268   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:07.720316   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:07.720326   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:07.720349   17264 retry.go:31] will retry after 5.215807238s: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:12.937212   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:12.989911   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:12.989960   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:12.989970   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:12.989995   17264 retry.go:31] will retry after 3.558921474s: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:16.550393   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:16.602252   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:16.602306   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:16.602319   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:16.602341   17264 retry.go:31] will retry after 4.289100278s: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:20.893165   17264 cli_runner.go:164] Run: docker container inspect docker-flags-064000 --format={{.State.Status}}
	W0429 05:54:20.946242   17264 cli_runner.go:211] docker container inspect docker-flags-064000 --format={{.State.Status}} returned with exit code 1
	I0429 05:54:20.946289   17264 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 05:54:20.946306   17264 oci.go:664] temporary error: container docker-flags-064000 status is  but expect it to be exited
	I0429 05:54:20.946340   17264 oci.go:88] couldn't shut down docker-flags-064000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	 
	I0429 05:54:20.946421   17264 cli_runner.go:164] Run: docker rm -f -v docker-flags-064000
	I0429 05:54:20.995358   17264 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-064000
	W0429 05:54:21.043703   17264 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-064000 returned with exit code 1
	I0429 05:54:21.043817   17264 cli_runner.go:164] Run: docker network inspect docker-flags-064000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:54:21.092665   17264 cli_runner.go:164] Run: docker network rm docker-flags-064000
	I0429 05:54:21.198278   17264 fix.go:124] Sleeping 1 second for extra luck!
	I0429 05:54:22.200478   17264 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:54:22.222691   17264 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:54:22.222863   17264 start.go:159] libmachine.API.Create for "docker-flags-064000" (driver="docker")
	I0429 05:54:22.222887   17264 client.go:168] LocalClient.Create starting
	I0429 05:54:22.223094   17264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:54:22.223192   17264 main.go:141] libmachine: Decoding PEM data...
	I0429 05:54:22.223216   17264 main.go:141] libmachine: Parsing certificate...
	I0429 05:54:22.223318   17264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:54:22.223393   17264 main.go:141] libmachine: Decoding PEM data...
	I0429 05:54:22.223408   17264 main.go:141] libmachine: Parsing certificate...
	I0429 05:54:22.224187   17264 cli_runner.go:164] Run: docker network inspect docker-flags-064000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:54:22.273704   17264 cli_runner.go:211] docker network inspect docker-flags-064000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:54:22.273811   17264 network_create.go:281] running [docker network inspect docker-flags-064000] to gather additional debugging logs...
	I0429 05:54:22.273830   17264 cli_runner.go:164] Run: docker network inspect docker-flags-064000
	W0429 05:54:22.322457   17264 cli_runner.go:211] docker network inspect docker-flags-064000 returned with exit code 1
	I0429 05:54:22.322486   17264 network_create.go:284] error running [docker network inspect docker-flags-064000]: docker network inspect docker-flags-064000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-064000 not found
	I0429 05:54:22.322501   17264 network_create.go:286] output of [docker network inspect docker-flags-064000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-064000 not found
	
	** /stderr **
	I0429 05:54:22.322644   17264 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:54:22.372509   17264 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:54:22.374012   17264 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:54:22.375573   17264 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:54:22.377114   17264 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:54:22.378653   17264 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:54:22.380211   17264 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:54:22.380566   17264 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021c9880}
	I0429 05:54:22.380578   17264 network_create.go:124] attempt to create docker network docker-flags-064000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0429 05:54:22.380645   17264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-064000 docker-flags-064000
	I0429 05:54:22.464311   17264 network_create.go:108] docker network docker-flags-064000 192.168.103.0/24 created
	I0429 05:54:22.464346   17264 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-064000" container
	I0429 05:54:22.464456   17264 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:54:22.515617   17264 cli_runner.go:164] Run: docker volume create docker-flags-064000 --label name.minikube.sigs.k8s.io=docker-flags-064000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:54:22.563491   17264 oci.go:103] Successfully created a docker volume docker-flags-064000
	I0429 05:54:22.563601   17264 cli_runner.go:164] Run: docker run --rm --name docker-flags-064000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-064000 --entrypoint /usr/bin/test -v docker-flags-064000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:54:22.793331   17264 oci.go:107] Successfully prepared a docker volume docker-flags-064000
	I0429 05:54:22.793371   17264 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:54:22.793392   17264 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:54:22.793521   17264 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-064000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 06:00:22.229949   17264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 06:00:22.230087   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:22.281344   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:22.281452   17264 retry.go:31] will retry after 334.133045ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:22.617871   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:22.668982   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:22.669090   17264 retry.go:31] will retry after 221.59438ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:22.891326   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:22.940540   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:22.940636   17264 retry.go:31] will retry after 732.25096ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:23.673415   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:23.724146   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	W0429 06:00:23.724254   17264 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	
	W0429 06:00:23.724276   17264 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:23.724329   17264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 06:00:23.724382   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:23.773250   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:23.773345   17264 retry.go:31] will retry after 237.969519ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:24.013525   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:24.064022   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:24.064120   17264 retry.go:31] will retry after 209.675083ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:24.276204   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:24.328522   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:24.328617   17264 retry.go:31] will retry after 341.093422ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:24.669962   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:24.720949   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:24.721042   17264 retry.go:31] will retry after 862.087595ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:25.583337   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:25.657550   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	W0429 06:00:25.657651   17264 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	
	W0429 06:00:25.657672   17264 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:25.657676   17264 start.go:128] duration metric: took 6m3.450310252s to createHost
	I0429 06:00:25.657740   17264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 06:00:25.657801   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:25.705324   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:25.705414   17264 retry.go:31] will retry after 333.527873ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:26.039185   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:26.090029   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:26.090123   17264 retry.go:31] will retry after 489.272371ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:26.580692   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:26.634014   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:26.634118   17264 retry.go:31] will retry after 737.137746ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:27.372192   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:27.421435   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	W0429 06:00:27.421543   17264 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	
	W0429 06:00:27.421562   17264 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:27.421625   17264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 06:00:27.421683   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:27.469843   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:27.469939   17264 retry.go:31] will retry after 142.899884ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:27.615181   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:27.664302   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:27.664400   17264 retry.go:31] will retry after 409.074217ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:28.075873   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:28.128156   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	I0429 06:00:28.128244   17264 retry.go:31] will retry after 818.191888ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:28.948824   17264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000
	W0429 06:00:29.000151   17264 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000 returned with exit code 1
	W0429 06:00:29.000246   17264 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	
	W0429 06:00:29.000268   17264 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-064000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-064000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	I0429 06:00:29.000281   17264 fix.go:56] duration metric: took 6m29.025383883s for fixHost
	I0429 06:00:29.000287   17264 start.go:83] releasing machines lock for "docker-flags-064000", held for 6m29.025430521s
	W0429 06:00:29.000388   17264 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-064000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-064000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 06:00:29.043873   17264 out.go:177] 
	W0429 06:00:29.065050   17264 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 06:00:29.065096   17264 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 06:00:29.065134   17264 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 06:00:29.087961   17264 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-064000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
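
The stderr trace above is one probe repeated: before each disk check, minikube resolves the host port that Docker published for the guest's 22/tcp by templating docker container inspect, and every attempt fails with "No such container" because the container was never created. A minimal standalone reproduction of that probe, assuming a local docker CLI and reusing the profile name from this run:

	# Port probe from the log: read the host port mapped to 22/tcp out of
	# the container's NetworkSettings. With no such container it fails
	# exactly as above ("No such container: docker-flags-064000").
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  docker-flags-064000

	# Disk checks minikube would then have run over SSH inside the guest
	# (GNU df flags, i.e. meant for the Linux guest, not the macOS host):
	df -h /var | awk 'NR==2{print $5}'    # Use% column for /var
	df -BG /var | awk 'NR==2{print $4}'   # GiB available on /var
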
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-064000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-064000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (201.115862ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-064000 host status: state: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-064000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-064000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-064000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (197.439818ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-064000 host status: state: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000
	

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-064000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-064000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
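
Both assertions interrogate the dockerd systemd unit inside the guest: --docker-env values should surface in the unit's Environment property, and --docker-opt values should be appended to dockerd's command line (the ExecStart property). A sketch of the passing case, assuming the profile had actually started:

	# --docker-env=FOO=BAR --docker-env=BAZ=BAT should appear here:
	out/minikube-darwin-amd64 -p docker-flags-064000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"

	# --docker-opt=debug --docker-opt=icc=true should appear here:
	out/minikube-darwin-amd64 -p docker-flags-064000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"
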
panic.go:626: *** TestDockerFlags FAILED at 2024-04-29 06:00:29.561529 -0700 PDT m=+7138.266999909
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-064000
helpers_test.go:235: (dbg) docker inspect docker-flags-064000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "docker-flags-064000",
	        "Id": "29d4c93dba9d0dc2f15e88eac53b70d44521a77d34ef7b92d04b391927b64f3d",
	        "Created": "2024-04-29T12:54:22.424101137Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-064000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
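
Note what the inspect matched: the container never existed, so the JSON above describes the leftover bridge network that minikube recreated at 05:54:22 (Scope, IPAM and the empty Containers map are network fields, not container fields). A bare docker inspect resolves a name against any object type; scoping the type separates the two cases:

	# "docker inspect NAME" matches any object kind with that name.
	# Scoped by type, the container is absent but the network remains:
	docker container inspect docker-flags-064000   # Error: No such container
	docker network inspect docker-flags-064000     # returns the network above
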
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-064000 -n docker-flags-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-064000 -n docker-flags-064000: exit status 7 (111.72587ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 06:00:29.723155   17854 status.go:249] status error: host: state: unknown state "docker-flags-064000": docker container inspect docker-flags-064000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-064000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-064000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-064000
--- FAIL: TestDockerFlags (759.92s)
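
The failed run leaves behind a profile and its dedicated bridge network, but no container. The cleanup the harness performs next (and the remedy the log itself suggests) reduces to:

	# Delete the profile; this should also tear down the
	# docker-flags-064000 network the run created:
	out/minikube-darwin-amd64 delete -p docker-flags-064000

	# If the network outlives the profile, it can be removed directly
	# (the recreate path above ran the same command at 05:54:21):
	docker network rm docker-flags-064000
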

                                                
                                    
TestForceSystemdFlag (755.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-239000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E0429 05:47:35.603278    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-239000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.017836858s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-239000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-239000" primary control-plane node in "force-systemd-flag-239000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-239000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:47:22.120319   17133 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:47:22.120594   17133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:47:22.120600   17133 out.go:304] Setting ErrFile to fd 2...
	I0429 05:47:22.120608   17133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:47:22.120803   17133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:47:22.122274   17133 out.go:298] Setting JSON to false
	I0429 05:47:22.145835   17133 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8212,"bootTime":1714386630,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 05:47:22.145962   17133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:47:22.167757   17133 out.go:177] * [force-systemd-flag-239000] minikube v1.33.0 on Darwin 14.4.1
	I0429 05:47:22.210389   17133 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 05:47:22.210466   17133 notify.go:220] Checking for updates...
	I0429 05:47:22.254197   17133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 05:47:22.275331   17133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 05:47:22.296442   17133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:47:22.317423   17133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 05:47:22.338253   17133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:47:22.360259   17133 config.go:182] Loaded profile config "force-systemd-env-746000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:47:22.360447   17133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:47:22.415204   17133 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 05:47:22.415389   17133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:47:22.525226   17133 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-29 12:47:22.513956189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
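
The dump above is the raw answer to `docker system info --format "{{json .}}"`, which minikube uses to sanity-check the daemon before committing to the docker driver. As a rough illustration, here is a self-contained Go sketch that issues the same query and decodes a few of the fields visible above; the struct below is my own trimmed assumption, not minikube's type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo keeps only the fields this sketch cares about; the real
    // JSON payload carries many more (see the dump above).
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        // Ask the daemon to render its info as a single JSON document.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker system info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("server %s on %s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }
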
	I0429 05:47:22.567610   17133 out.go:177] * Using the docker driver based on user configuration
	I0429 05:47:22.588800   17133 start.go:297] selected driver: docker
	I0429 05:47:22.588838   17133 start.go:901] validating driver "docker" against <nil>
	I0429 05:47:22.588853   17133 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:47:22.593242   17133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:47:22.696959   17133 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-29 12:47:22.6858712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:47:22.697150   17133 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:47:22.697348   17133 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 05:47:22.719092   17133 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 05:47:22.740777   17133 cni.go:84] Creating CNI manager for ""
	I0429 05:47:22.740821   17133 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:47:22.740838   17133 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
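
The two cni.go lines above encode a simple rule: the "docker" driver plus the "docker" container runtime on Kubernetes v1.24 or newer gets the bridge CNI, and NetworkPlugin is set to cni. A minimal sketch of that decision as a pure function; the function name and the version parsing are illustrative assumptions, not minikube's code:

    package main

    import "fmt"

    // chooseCNI mirrors the rule in the log: "docker" driver + "docker"
    // container runtime on Kubernetes v1.24+ recommends the bridge CNI.
    func chooseCNI(driver, runtime, k8sVersion string) string {
        var major, minor int
        // Versions in the log look like "v1.30.0"; ignore the patch level.
        if _, err := fmt.Sscanf(k8sVersion, "v%d.%d", &major, &minor); err != nil {
            return ""
        }
        if driver == "docker" && runtime == "docker" && (major > 1 || (major == 1 && minor >= 24)) {
            return "bridge"
        }
        return "" // let other rules decide
    }

    func main() {
        fmt.Println(chooseCNI("docker", "docker", "v1.30.0")) // prints: bridge
    }
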
	I0429 05:47:22.740958   17133 start.go:340] cluster config:
	{Name:force-systemd-flag-239000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:47:22.762822   17133 out.go:177] * Starting "force-systemd-flag-239000" primary control-plane node in "force-systemd-flag-239000" cluster
	I0429 05:47:22.804820   17133 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 05:47:22.826805   17133 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 05:47:22.868667   17133 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:47:22.868718   17133 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 05:47:22.868744   17133 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 05:47:22.868760   17133 cache.go:56] Caching tarball of preloaded images
	I0429 05:47:22.869020   17133 preload.go:173] Found /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 05:47:22.869041   17133 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:47:22.870023   17133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/force-systemd-flag-239000/config.json ...
	I0429 05:47:22.870208   17133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/force-systemd-flag-239000/config.json: {Name:mk3eb8d41a96209fda6396bacaac0635bb2cc216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
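
The config.json write above is guarded by a named write lock with a 500 ms retry delay and a 1 m timeout. A toy sketch of that acquire-with-deadline pattern using an in-process lock table; minikube's real lock is cross-process, so this only illustrates the shape of the Delay/Timeout loop:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "sync"
        "time"
    )

    var (
        mu    sync.Mutex
        locks = map[string]bool{} // lock name -> currently held
    )

    // acquire polls for the named lock, retrying every delay until the
    // timeout expires, mimicking the {Delay:500ms Timeout:1m0s} spec above.
    func acquire(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            mu.Lock()
            if !locks[name] {
                locks[name] = true
                mu.Unlock()
                return nil
            }
            mu.Unlock()
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring " + name)
            }
            time.Sleep(delay)
        }
    }

    func release(name string) { mu.Lock(); delete(locks, name); mu.Unlock() }

    func main() {
        path := "config.json" // hypothetical profile path
        if err := acquire(path, 500*time.Millisecond, time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        defer release(path)
        // The file write happens only while the lock is held.
        if err := os.WriteFile(path, []byte(`{"Name":"example"}`), 0o644); err != nil {
            fmt.Println(err)
        }
    }
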
	I0429 05:47:22.920980   17133 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 05:47:22.920999   17133 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 05:47:22.921018   17133 cache.go:194] Successfully downloaded all kic artifacts
	I0429 05:47:22.921062   17133 start.go:360] acquireMachinesLock for force-systemd-flag-239000: {Name:mk11062f25d114310632647dcf390052df7231d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:47:22.921223   17133 start.go:364] duration metric: took 149.751µs to acquireMachinesLock for "force-systemd-flag-239000"
	I0429 05:47:22.921260   17133 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-239000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:47:22.921336   17133 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:47:22.963692   17133 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:47:22.964042   17133 start.go:159] libmachine.API.Create for "force-systemd-flag-239000" (driver="docker")
	I0429 05:47:22.964091   17133 client.go:168] LocalClient.Create starting
	I0429 05:47:22.964289   17133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:47:22.964386   17133 main.go:141] libmachine: Decoding PEM data...
	I0429 05:47:22.964419   17133 main.go:141] libmachine: Parsing certificate...
	I0429 05:47:22.964525   17133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:47:22.964613   17133 main.go:141] libmachine: Decoding PEM data...
	I0429 05:47:22.964630   17133 main.go:141] libmachine: Parsing certificate...
	I0429 05:47:22.965495   17133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-239000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:47:23.014193   17133 cli_runner.go:211] docker network inspect force-systemd-flag-239000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:47:23.014300   17133 network_create.go:281] running [docker network inspect force-systemd-flag-239000] to gather additional debugging logs...
	I0429 05:47:23.014316   17133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-239000
	W0429 05:47:23.062754   17133 cli_runner.go:211] docker network inspect force-systemd-flag-239000 returned with exit code 1
	I0429 05:47:23.062782   17133 network_create.go:284] error running [docker network inspect force-systemd-flag-239000]: docker network inspect force-systemd-flag-239000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-239000 not found
	I0429 05:47:23.062795   17133 network_create.go:286] output of [docker network inspect force-systemd-flag-239000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-239000 not found
	
	** /stderr **
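
The failed probe above is how minikube discovers whether the cluster network already exists: a `docker network inspect` with a Go template that renders name, driver, subnet and gateway in one shot, where exit status 1 plus `network ... not found` on stderr simply means the network has to be created. A sketch of the same probe from Go, with the template trimmed to four fields:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // networkFormat is a trimmed version of the template the log shows
    // minikube passing to `docker network inspect`.
    const networkFormat = `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`

    // inspectNetwork returns the rendered JSON and whether the network exists.
    func inspectNetwork(name string) (string, bool, error) {
        out, err := exec.Command("docker", "network", "inspect", name, "--format", networkFormat).CombinedOutput()
        if err != nil {
            // "not found" on stderr means the network simply doesn't exist yet.
            if strings.Contains(string(out), "not found") {
                return "", false, nil
            }
            return "", false, err
        }
        return strings.TrimSpace(string(out)), true, nil
    }

    func main() {
        rendered, exists, err := inspectNetwork("bridge")
        fmt.Println(rendered, exists, err)
    }
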
	I0429 05:47:23.062935   17133 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:47:23.113363   17133 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:47:23.114987   17133 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:47:23.115357   17133 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002281b80}
	I0429 05:47:23.115374   17133 network_create.go:124] attempt to create docker network force-systemd-flag-239000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 05:47:23.115441   17133 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-239000 force-systemd-flag-239000
	W0429 05:47:23.163773   17133 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-239000 force-systemd-flag-239000 returned with exit code 1
	W0429 05:47:23.163814   17133 network_create.go:149] failed to create docker network force-systemd-flag-239000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-239000 force-systemd-flag-239000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 05:47:23.163836   17133 network_create.go:116] failed to create docker network force-systemd-flag-239000 192.168.67.0/24, will retry: subnet is taken
	I0429 05:47:23.165234   17133 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:47:23.165605   17133 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022662e0}
	I0429 05:47:23.165617   17133 network_create.go:124] attempt to create docker network force-systemd-flag-239000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 05:47:23.165685   17133 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-239000 force-systemd-flag-239000
	I0429 05:47:23.249005   17133 network_create.go:108] docker network force-systemd-flag-239000 192.168.76.0/24 created
	I0429 05:47:23.249048   17133 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-239000" container
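
The network.go lines above show the subnet search: candidate /24 ranges start at 192.168.49.0 and step the third octet by 9 (49, 58, 67, 76, ...), skipping ranges already reserved, and the node's static IP is then gateway plus one, i.e. the .2 host. Note the daemon can still reject a candidate that looked free, as 192.168.67.0/24 was rejected above with "Pool overlaps with other one on this address space", forcing another pass. A sketch of that walk; the isTaken check below is a placeholder assumption for the real reservation and overlap checks:

    package main

    import "fmt"

    // isTaken stands in for the real reservation check, which also has to
    // survive the daemon rejecting a candidate at create time.
    func isTaken(subnet string) bool {
        reserved := map[string]bool{
            "192.168.49.0/24": true, // seen as reserved in the log
            "192.168.58.0/24": true,
            "192.168.67.0/24": true, // rejected by the daemon on this run
        }
        return reserved[subnet]
    }

    // freeSubnet steps the third octet by 9 starting at 49, mirroring the
    // candidates seen in the log: 49, 58, 67, 76, 85, 94, ...
    func freeSubnet() (subnet, gateway, staticIP string, ok bool) {
        for octet := 49; octet <= 254; octet += 9 {
            s := fmt.Sprintf("192.168.%d.0/24", octet)
            if isTaken(s) {
                continue
            }
            return s, fmt.Sprintf("192.168.%d.1", octet), fmt.Sprintf("192.168.%d.2", octet), true
        }
        return "", "", "", false
    }

    func main() {
        fmt.Println(freeSubnet()) // 192.168.76.0/24 192.168.76.1 192.168.76.2 true
    }

With 49 as the default first choice, 192.168.76.0/24 is the first free candidate here, which matches the network that was eventually created above.
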
	I0429 05:47:23.249187   17133 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:47:23.300634   17133 cli_runner.go:164] Run: docker volume create force-systemd-flag-239000 --label name.minikube.sigs.k8s.io=force-systemd-flag-239000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:47:23.350250   17133 oci.go:103] Successfully created a docker volume force-systemd-flag-239000
	I0429 05:47:23.350361   17133 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-239000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-239000 --entrypoint /usr/bin/test -v force-systemd-flag-239000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:47:23.664519   17133 oci.go:107] Successfully prepared a docker volume force-systemd-flag-239000
	I0429 05:47:23.664560   17133 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:47:23.664586   17133 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:47:23.664696   17133 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-239000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
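
Preload extraction runs tar inside a throwaway container: the lz4 tarball is bind-mounted read-only at /preloaded.tar, the cluster's named volume at /extractDir, and tar unpacks one into the other. Note that the next log entry is almost six minutes later, which is what eventually blows the 360 s create-host budget. An equivalent invocation from Go, with paths shortened for the sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload mirrors the `docker run --rm --entrypoint /usr/bin/tar`
    // command in the log: unpack the lz4 preload tarball into the volume.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload("preloaded-images.tar.lz4", // shortened path
            "force-systemd-flag-239000",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706")
        fmt.Println(err)
    }
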
	I0429 05:53:22.947063   17133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:53:22.947204   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:22.999729   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:22.999856   17133 retry.go:31] will retry after 247.626161ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
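
Each failed port-22 lookup above goes through retry.go with a growing, jittered delay (247 ms, 500 ms, 682 ms, ...). A compact sketch of that retry shape; I have not verified minikube's exact backoff policy, so treat the doubling-plus-jitter below as an approximation:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts run out, sleeping a
    // jittered, growing delay between tries, similar to the intervals
    // visible in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(4, 250*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("get port 22: no such container")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
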
	I0429 05:53:23.249855   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:23.302402   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:23.302516   17133 retry.go:31] will retry after 500.329061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:23.804072   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:23.854602   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:23.854693   17133 retry.go:31] will retry after 682.51235ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:24.539638   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:24.592332   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	W0429 05:53:24.592444   17133 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	
	W0429 05:53:24.592461   17133 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:24.592519   17133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:53:24.592581   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:24.643313   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:24.643415   17133 retry.go:31] will retry after 262.841148ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:24.908716   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:24.960812   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:24.960915   17133 retry.go:31] will retry after 464.329421ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:25.427706   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:25.477740   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:25.477833   17133 retry.go:31] will retry after 517.194322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:25.996859   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:53:26.050414   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	W0429 05:53:26.050512   17133 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	
	W0429 05:53:26.050535   17133 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:26.050547   17133 start.go:128] duration metric: took 6m3.148604182s to createHost
	I0429 05:53:26.050556   17133 start.go:83] releasing machines lock for "force-systemd-flag-239000", held for 6m3.148728391s
	W0429 05:53:26.050571   17133 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
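
createHost above is bounded at 360 s of wall clock; when the budget is spent, the machines lock is released and the error propagates as `create host timed out`. A sketch of bounding a create step with a context deadline; createContainer below is a placeholder for the real provisioning work:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // createContainer stands in for the real provisioning work, which in
    // this run never finished because preload extraction was still going.
    func createContainer(ctx context.Context) error {
        select {
        case <-time.After(10 * time.Second): // pretend provisioning takes 10 s
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func createHost(timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        if err := createContainer(ctx); err != nil {
            return fmt.Errorf("creating host: create host timed out in %f seconds", timeout.Seconds())
        }
        return nil
    }

    func main() {
        fmt.Println(createHost(2 * time.Second)) // times out in this demo
    }
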
	I0429 05:53:26.050996   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:26.098855   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:26.098913   17133 delete.go:82] Unable to get host status for force-systemd-flag-239000, assuming it has already been deleted: state: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	W0429 05:53:26.099018   17133 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 05:53:26.099027   17133 start.go:728] Will try again in 5 seconds ...
	I0429 05:53:31.101307   17133 start.go:360] acquireMachinesLock for force-systemd-flag-239000: {Name:mk11062f25d114310632647dcf390052df7231d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:53:31.101657   17133 start.go:364] duration metric: took 184.984µs to acquireMachinesLock for "force-systemd-flag-239000"
	I0429 05:53:31.101702   17133 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:53:31.101719   17133 fix.go:54] fixHost starting: 
	I0429 05:53:31.102234   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:31.154404   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:31.154465   17133 fix.go:112] recreateIfNeeded on force-systemd-flag-239000: state= err=unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:31.154482   17133 fix.go:117] machineExists: false. err=machine does not exist
	I0429 05:53:31.176273   17133 out.go:177] * docker "force-systemd-flag-239000" container is missing, will recreate.
	I0429 05:53:31.197830   17133 delete.go:124] DEMOLISHING force-systemd-flag-239000 ...
	I0429 05:53:31.198008   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:31.246933   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	W0429 05:53:31.246987   17133 stop.go:83] unable to get state: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:31.247007   17133 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:31.247388   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:31.295477   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:31.295545   17133 delete.go:82] Unable to get host status for force-systemd-flag-239000, assuming it has already been deleted: state: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:31.295639   17133 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-239000
	W0429 05:53:31.343448   17133 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:31.343511   17133 kic.go:371] could not find the container force-systemd-flag-239000 to remove it. will try anyways
	I0429 05:53:31.343592   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:31.390786   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	W0429 05:53:31.390836   17133 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:31.390943   17133 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-239000 /bin/bash -c "sudo init 0"
	W0429 05:53:31.438796   17133 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-239000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 05:53:31.438825   17133 oci.go:650] error shutdown force-systemd-flag-239000: docker exec --privileged -t force-systemd-flag-239000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:32.439303   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:32.489487   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:32.489532   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:32.489541   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:32.489565   17133 retry.go:31] will retry after 284.083209ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
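
The DEMOLISHING sequence tries a graceful `sudo init 0` inside the container, then polls `docker container inspect --format={{.State.Status}}` until the status reads `exited`; here every poll fails with `No such container`, so after a few retries the code gives up (`couldn't shut down ... might be okay`) and falls through to a force remove. A sketch of that verify loop:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("unknown state %q: %v", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    // waitExited polls until the container reports "exited" or retries run
    // out; callers treat failure as non-fatal and force-remove afterwards.
    func waitExited(name string, tries int) error {
        var lastErr error
        for i := 0; i < tries; i++ {
            status, err := containerStatus(name)
            if err == nil && status == "exited" {
                return nil
            }
            if err != nil {
                lastErr = err
            } else {
                lastErr = fmt.Errorf("status is %q but expect it to be exited", status)
            }
            time.Sleep(time.Duration(i+1) * 300 * time.Millisecond)
        }
        return lastErr
    }

    func main() {
        fmt.Println(waitExited("force-systemd-flag-239000", 3))
    }
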
	I0429 05:53:32.775479   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:32.828220   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:32.828268   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:32.828283   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:32.828308   17133 retry.go:31] will retry after 488.043794ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:33.316643   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:33.367378   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:33.367430   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:33.367444   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:33.367471   17133 retry.go:31] will retry after 1.449253587s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:34.819103   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:34.869071   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:34.869114   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:34.869131   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:34.869159   17133 retry.go:31] will retry after 1.629593591s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:36.501079   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:36.552345   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:36.552391   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:36.552401   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:36.552423   17133 retry.go:31] will retry after 3.711405727s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:40.266185   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:40.318859   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:40.318911   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:40.318923   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:40.318951   17133 retry.go:31] will retry after 2.198882401s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:42.519562   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:42.568948   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:42.568993   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:42.569001   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:42.569024   17133 retry.go:31] will retry after 4.220292159s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:46.791746   17133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
	W0429 05:53:46.843645   17133 cli_runner.go:211] docker container inspect force-systemd-flag-239000 --format={{.State.Status}} returned with exit code 1
	I0429 05:53:46.843689   17133 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:53:46.843698   17133 oci.go:664] temporary error: container force-systemd-flag-239000 status is  but expect it to be exited
	I0429 05:53:46.843729   17133 oci.go:88] couldn't shut down force-systemd-flag-239000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	 
	I0429 05:53:46.843797   17133 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-239000
	I0429 05:53:46.892619   17133 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-239000
	W0429 05:53:46.940724   17133 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:46.940839   17133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-239000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:53:46.989628   17133 cli_runner.go:164] Run: docker network rm force-systemd-flag-239000
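
Teardown is deliberately tolerant: `docker rm -f -v` removes the container (if it exists) together with its anonymous volumes, and `docker network rm` drops the per-cluster bridge so the recreate starts clean. A sketch of the same two steps, ignoring per-step failures the way the log does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // teardown force-removes the container and its network; errors are
    // logged but not fatal, since either object may already be gone.
    func teardown(name string) {
        for _, args := range [][]string{
            {"rm", "-f", "-v", name},
            {"network", "rm", name},
        } {
            if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
                fmt.Printf("docker %v: %v: %s\n", args, err, out)
            }
        }
    }

    func main() {
        teardown("force-systemd-flag-239000")
    }
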
	I0429 05:53:47.090110   17133 fix.go:124] Sleeping 1 second for extra luck!
	I0429 05:53:48.090263   17133 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:53:48.112230   17133 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:53:48.112416   17133 start.go:159] libmachine.API.Create for "force-systemd-flag-239000" (driver="docker")
	I0429 05:53:48.112443   17133 client.go:168] LocalClient.Create starting
	I0429 05:53:48.112666   17133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:53:48.112784   17133 main.go:141] libmachine: Decoding PEM data...
	I0429 05:53:48.112806   17133 main.go:141] libmachine: Parsing certificate...
	I0429 05:53:48.112893   17133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:53:48.112964   17133 main.go:141] libmachine: Decoding PEM data...
	I0429 05:53:48.112979   17133 main.go:141] libmachine: Parsing certificate...
	I0429 05:53:48.134509   17133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-239000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:53:48.183634   17133 cli_runner.go:211] docker network inspect force-systemd-flag-239000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:53:48.183726   17133 network_create.go:281] running [docker network inspect force-systemd-flag-239000] to gather additional debugging logs...
	I0429 05:53:48.183742   17133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-239000
	W0429 05:53:48.232107   17133 cli_runner.go:211] docker network inspect force-systemd-flag-239000 returned with exit code 1
	I0429 05:53:48.232139   17133 network_create.go:284] error running [docker network inspect force-systemd-flag-239000]: docker network inspect force-systemd-flag-239000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-239000 not found
	I0429 05:53:48.232153   17133 network_create.go:286] output of [docker network inspect force-systemd-flag-239000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-239000 not found
	
	** /stderr **
	I0429 05:53:48.232277   17133 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:53:48.283140   17133 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:53:48.284735   17133 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:53:48.286233   17133 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:53:48.287904   17133 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:53:48.289627   17133 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:53:48.290108   17133 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000739fc0}
	I0429 05:53:48.290131   17133 network_create.go:124] attempt to create docker network force-systemd-flag-239000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0429 05:53:48.290222   17133 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-239000 force-systemd-flag-239000
	I0429 05:53:48.375111   17133 network_create.go:108] docker network force-systemd-flag-239000 192.168.94.0/24 created
	I0429 05:53:48.375148   17133 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-239000" container
	I0429 05:53:48.375250   17133 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:53:48.425667   17133 cli_runner.go:164] Run: docker volume create force-systemd-flag-239000 --label name.minikube.sigs.k8s.io=force-systemd-flag-239000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:53:48.473877   17133 oci.go:103] Successfully created a docker volume force-systemd-flag-239000
	I0429 05:53:48.473990   17133 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-239000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-239000 --entrypoint /usr/bin/test -v force-systemd-flag-239000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:53:48.731757   17133 oci.go:107] Successfully prepared a docker volume force-systemd-flag-239000
	I0429 05:53:48.731795   17133 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:53:48.731814   17133 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:53:48.731942   17133 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-239000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:59:48.120463   17133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:59:48.120589   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:48.171875   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:48.171987   17133 retry.go:31] will retry after 178.339246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:48.350853   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:48.402265   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:48.402383   17133 retry.go:31] will retry after 265.781418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:48.669326   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:48.721944   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:48.722041   17133 retry.go:31] will retry after 827.251805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:49.551678   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:49.604043   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:49.604154   17133 retry.go:31] will retry after 475.548962ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:50.081661   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:50.134072   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	W0429 05:59:50.134171   17133 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	
	W0429 05:59:50.134191   17133 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:50.134245   17133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:59:50.134315   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:50.183164   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:50.183264   17133 retry.go:31] will retry after 216.590001ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:50.400952   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:50.450988   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:50.451138   17133 retry.go:31] will retry after 224.944805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:50.678453   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:50.728090   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:50.728188   17133 retry.go:31] will retry after 646.575579ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:51.377170   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:51.429407   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:51.429508   17133 retry.go:31] will retry after 584.487192ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:52.016379   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:52.066452   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	W0429 05:59:52.066559   17133 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	
	W0429 05:59:52.066580   17133 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:52.066589   17133 start.go:128] duration metric: took 6m3.96946837s to createHost
	I0429 05:59:52.066654   17133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:59:52.066715   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:52.134731   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:52.134817   17133 retry.go:31] will retry after 201.595971ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:52.338746   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:52.448877   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:52.449001   17133 retry.go:31] will retry after 236.654578ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:52.686003   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:52.737950   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:52.738043   17133 retry.go:31] will retry after 379.849478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:53.119359   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:53.170640   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:53.170736   17133 retry.go:31] will retry after 785.015029ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:53.957063   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:54.009935   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	W0429 05:59:54.010037   17133 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	
	W0429 05:59:54.010053   17133 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:54.010110   17133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:59:54.010166   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:54.060973   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:54.061067   17133 retry.go:31] will retry after 172.273961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:54.235697   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:54.285994   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:54.286086   17133 retry.go:31] will retry after 469.184531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:54.756065   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:54.809121   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:54.809216   17133 retry.go:31] will retry after 465.374552ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:55.275832   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:55.328775   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	I0429 05:59:55.328868   17133 retry.go:31] will retry after 515.77016ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:55.846093   17133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000
	W0429 05:59:55.897404   17133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000 returned with exit code 1
	W0429 05:59:55.897503   17133 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	
	W0429 05:59:55.897519   17133 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-239000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-239000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	I0429 05:59:55.897536   17133 fix.go:56] duration metric: took 6m24.788589486s for fixHost
	I0429 05:59:55.897544   17133 start.go:83] releasing machines lock for "force-systemd-flag-239000", held for 6m24.788643322s
	W0429 05:59:55.897619   17133 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-239000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-239000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 05:59:55.939795   17133 out.go:177] 
	W0429 05:59:55.961203   17133 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 05:59:55.961264   17133 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 05:59:55.961288   17133 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 05:59:55.982802   17133 out.go:177] 

** /stderr **
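For context on the failure above: the preload extraction ("Starting extracting preloaded images to volume ...") began at 05:53:48, and the very next log line is at 05:59:48, so the extraction was still running when the 360-second create-host window expired and the node container was never created. The port probe minikube then loops on can be reproduced by hand (command copied verbatim from the log; the profile name belongs to this run):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-239000
    # On a healthy profile this prints the host port mapped to the guest's SSH port 22;
    # here it exits 1 with "No such container" because the container never came up.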
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-239000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-239000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-239000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (196.614218ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-239000 host status: state: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000
	

** /stderr **
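The check that just failed needs an SSH session into the node, but the underlying query is plain docker info. Against a working profile (or any reachable Docker daemon) the same check looks like this; a sketch, with <profile> standing in for a real profile name:

    out/minikube-darwin-amd64 -p <profile> ssh "docker info --format {{.CgroupDriver}}"
    # Or ask the local daemon directly:
    docker info --format '{{.CgroupDriver}}'
    # Prints the driver in use, e.g. "cgroupfs" (as in the docker info dumps later in this report) or "systemd".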
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-239000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-29 05:59:56.276963 -0700 PDT m=+7104.983060134
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-239000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-239000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-239000",
	        "Id": "715bed214c467d762a1725b9d9bb684d28d338f6fd2d42e015eb040cd2074ba8",
	        "Created": "2024-04-29T12:53:48.335607337Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-239000"
	        }
	    }
	]

-- /stdout --
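Note that the docker inspect output above is the leftover network named force-systemd-flag-239000 (Driver "bridge", subnet 192.168.94.0/24, created at 12:53:48), not a container: a bare docker inspect matches any object type with that name. When reading these post-mortems the type can be pinned explicitly (standard docker CLI flags):

    docker inspect --type container force-systemd-flag-239000   # exit 1: no such container
    docker network inspect force-systemd-flag-239000            # matches the leftover network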
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-239000 -n force-systemd-flag-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-239000 -n force-systemd-flag-239000: exit status 7 (112.546717ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:59:56.439601   17706 status.go:249] status error: host: state: unknown state "force-systemd-flag-239000": docker container inspect force-systemd-flag-239000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-239000

** /stderr **
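"Nonexistent" above is the status helper's mapping of a missing container; the raw query it logs can be run directly (command copied from the stderr above):

    docker container inspect force-systemd-flag-239000 --format={{.State.Status}}
    # Prints "running", "exited", etc. on success; exits 1 with
    # "Error response from daemon: No such container: ..." once the container is gone.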
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-239000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-239000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-239000
--- FAIL: TestForceSystemdFlag (755.10s)
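Cleanup follows the suggestion printed by the failure itself. If a delete were ever to leave the per-profile Docker objects behind, they could also be removed by hand; a sketch, assuming the network and volume names match the profile as created in the log above:

    out/minikube-darwin-amd64 delete -p force-systemd-flag-239000
    # Manual fallback, only if delete did not already remove these:
    docker network rm force-systemd-flag-239000
    docker volume rm force-systemd-flag-239000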

TestForceSystemdEnv (754.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-746000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0429 05:35:38.636106    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:37:35.585711    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:38:20.488853    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:41:23.541009    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:42:35.594527    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:43:20.497692    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-746000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.349643742s)

-- stdout --
	* [force-systemd-env-746000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-746000" primary control-plane node in "force-systemd-env-746000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-746000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0429 05:35:16.131784   16708 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:35:16.132049   16708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:35:16.132054   16708 out.go:304] Setting ErrFile to fd 2...
	I0429 05:35:16.132058   16708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:35:16.132232   16708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:35:16.133733   16708 out.go:298] Setting JSON to false
	I0429 05:35:16.155694   16708 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7486,"bootTime":1714386630,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 05:35:16.155789   16708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:35:16.177836   16708 out.go:177] * [force-systemd-env-746000] minikube v1.33.0 on Darwin 14.4.1
	I0429 05:35:16.241564   16708 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 05:35:16.219648   16708 notify.go:220] Checking for updates...
	I0429 05:35:16.283562   16708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 05:35:16.304720   16708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 05:35:16.325402   16708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:35:16.346524   16708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 05:35:16.388346   16708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0429 05:35:16.410544   16708 config.go:182] Loaded profile config "offline-docker-733000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:35:16.410713   16708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:35:16.464906   16708 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 05:35:16.465072   16708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:35:16.571895   16708 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-29 12:35:16.561471604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:35:16.593874   16708 out.go:177] * Using the docker driver based on user configuration
	I0429 05:35:16.615358   16708 start.go:297] selected driver: docker
	I0429 05:35:16.615401   16708 start.go:901] validating driver "docker" against <nil>
	I0429 05:35:16.615420   16708 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:35:16.619745   16708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:35:16.724432   16708 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-29 12:35:16.713782698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:35:16.724612   16708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 05:35:16.724788   16708 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 05:35:16.746274   16708 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 05:35:16.767873   16708 cni.go:84] Creating CNI manager for ""
	I0429 05:35:16.767916   16708 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 05:35:16.767929   16708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 05:35:16.768026   16708 start.go:340] cluster config:
	{Name:force-systemd-env-746000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:35:16.788925   16708 out.go:177] * Starting "force-systemd-env-746000" primary control-plane node in "force-systemd-env-746000" cluster
	I0429 05:35:16.831085   16708 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 05:35:16.852039   16708 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 05:35:16.893946   16708 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:35:16.893993   16708 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 05:35:16.894037   16708 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 05:35:16.894053   16708 cache.go:56] Caching tarball of preloaded images
	I0429 05:35:16.894262   16708 preload.go:173] Found /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 05:35:16.894284   16708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:35:16.895163   16708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/force-systemd-env-746000/config.json ...
	I0429 05:35:16.895403   16708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/force-systemd-env-746000/config.json: {Name:mk9e497b2cbde04f7541564c81cbe9e03766dec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 05:35:16.946359   16708 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 05:35:16.946389   16708 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 05:35:16.946409   16708 cache.go:194] Successfully downloaded all kic artifacts
	I0429 05:35:16.946453   16708 start.go:360] acquireMachinesLock for force-systemd-env-746000: {Name:mk555dc2b68bdce40228ad56636106c182cb3658 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:35:16.946617   16708 start.go:364] duration metric: took 152.284µs to acquireMachinesLock for "force-systemd-env-746000"
	I0429 05:35:16.946645   16708 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-746000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 05:35:16.946715   16708 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:35:16.989202   16708 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:35:16.989613   16708 start.go:159] libmachine.API.Create for "force-systemd-env-746000" (driver="docker")
	I0429 05:35:16.989670   16708 client.go:168] LocalClient.Create starting
	I0429 05:35:16.989906   16708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:35:16.990020   16708 main.go:141] libmachine: Decoding PEM data...
	I0429 05:35:16.990054   16708 main.go:141] libmachine: Parsing certificate...
	I0429 05:35:16.990159   16708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:35:16.990238   16708 main.go:141] libmachine: Decoding PEM data...
	I0429 05:35:16.990252   16708 main.go:141] libmachine: Parsing certificate...
	I0429 05:35:16.991111   16708 cli_runner.go:164] Run: docker network inspect force-systemd-env-746000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:35:17.040107   16708 cli_runner.go:211] docker network inspect force-systemd-env-746000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:35:17.040216   16708 network_create.go:281] running [docker network inspect force-systemd-env-746000] to gather additional debugging logs...
	I0429 05:35:17.040232   16708 cli_runner.go:164] Run: docker network inspect force-systemd-env-746000
	W0429 05:35:17.090577   16708 cli_runner.go:211] docker network inspect force-systemd-env-746000 returned with exit code 1
	I0429 05:35:17.090611   16708 network_create.go:284] error running [docker network inspect force-systemd-env-746000]: docker network inspect force-systemd-env-746000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-746000 not found
	I0429 05:35:17.090622   16708 network_create.go:286] output of [docker network inspect force-systemd-env-746000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-746000 not found
	
	** /stderr **
	I0429 05:35:17.090738   16708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:35:17.140614   16708 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:35:17.142225   16708 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:35:17.143858   16708 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:35:17.145586   16708 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:35:17.146103   16708 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002402f10}
	I0429 05:35:17.146125   16708 network_create.go:124] attempt to create docker network force-systemd-env-746000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 05:35:17.146232   16708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-746000 force-systemd-env-746000
	I0429 05:35:17.230736   16708 network_create.go:108] docker network force-systemd-env-746000 192.168.85.0/24 created
	I0429 05:35:17.230778   16708 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-746000" container
	I0429 05:35:17.230899   16708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:35:17.280323   16708 cli_runner.go:164] Run: docker volume create force-systemd-env-746000 --label name.minikube.sigs.k8s.io=force-systemd-env-746000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:35:17.329503   16708 oci.go:103] Successfully created a docker volume force-systemd-env-746000
	I0429 05:35:17.329626   16708 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-746000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-746000 --entrypoint /usr/bin/test -v force-systemd-env-746000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:35:17.694358   16708 oci.go:107] Successfully prepared a docker volume force-systemd-env-746000
	I0429 05:35:17.694405   16708 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:35:17.694419   16708 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:35:17.694513   16708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-746000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:41:17.002136   16708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:41:17.002273   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:17.054783   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:41:17.054918   16708 retry.go:31] will retry after 343.165481ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:17.399676   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:17.451882   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:41:17.451995   16708 retry.go:31] will retry after 442.359702ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:17.896788   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:17.947001   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:41:17.947096   16708 retry.go:31] will retry after 281.959403ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:18.231418   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:18.282422   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	W0429 05:41:18.282529   16708 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	
	W0429 05:41:18.282548   16708 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
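
// The "will retry after ..." lines come from minikube's retry helper
// (retry.go). This is a standard-library-only sketch of the same pattern:
// jittered sub-second waits, give up after a fixed number of attempts. The
// real helper differs in detail.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jittered delay, comparable to the 200-500ms waits logged above.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	tries := 0
	_ = retry(4, 250*time.Millisecond, func() error {
		tries++
		return fmt.Errorf("No such container (attempt %d)", tries)
	})
}
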
	I0429 05:41:18.282608   16708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:41:18.282671   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:18.329441   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:41:18.329532   16708 retry.go:31] will retry after 185.803394ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:18.516774   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:18.567237   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:41:18.567330   16708 retry.go:31] will retry after 464.052049ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:19.033755   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:19.085228   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:41:19.085324   16708 retry.go:31] will retry after 473.096338ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:19.560874   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:41:19.616900   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	W0429 05:41:19.617003   16708 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	
	W0429 05:41:19.617026   16708 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
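
// The two probes that just failed check disk pressure inside the guest:
// percent of /var used (df -h, column 5 of row 2) and GiB free (df -BG,
// column 4 of row 2). Inside minikube they run over SSH via ssh_runner;
// this sketch runs the same pipelines on a Linux host for illustration
// (df -BG is GNU coreutils).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func probe(script string) (string, error) {
	out, err := exec.Command("sh", "-c", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	used, err := probe(`df -h /var | awk 'NR==2{print $5}'`)
	fmt.Println("percent used:", used, err)
	free, err := probe(`df -BG /var | awk 'NR==2{print $4}'`)
	fmt.Println("GiB available:", free, err)
}
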
	I0429 05:41:19.617044   16708 start.go:128] duration metric: took 6m2.659474576s to createHost
	I0429 05:41:19.617057   16708 start.go:83] releasing machines lock for "force-systemd-env-746000", held for 6m2.659591757s
	W0429 05:41:19.617072   16708 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
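
// createHost gave up at the fixed 360-second limit; the 6m2.66s duration
// metric above is that limit plus retry overhead. A sketch of bounding a
// slow create with a deadline (illustrative only; the 2s demo deadline
// stands in for minikube's 360s).
package main

import (
	"context"
	"fmt"
	"time"
)

func createHost(ctx context.Context) error {
	select {
	case <-time.After(10 * time.Second): // stand-in for a wedged container create
		return nil
	case <-ctx.Done():
		return fmt.Errorf("create host timed out: %w", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	fmt.Println(createHost(ctx))
}
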
	I0429 05:41:19.617545   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:19.667204   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:19.667269   16708 delete.go:82] Unable to get host status for force-systemd-env-746000, assuming it has already been deleted: state: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	W0429 05:41:19.667347   16708 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 05:41:19.667357   16708 start.go:728] Will try again in 5 seconds ...
	I0429 05:41:24.668757   16708 start.go:360] acquireMachinesLock for force-systemd-env-746000: {Name:mk555dc2b68bdce40228ad56636106c182cb3658 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:41:24.669746   16708 start.go:364] duration metric: took 777.998µs to acquireMachinesLock for "force-systemd-env-746000"
	I0429 05:41:24.669859   16708 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:41:24.669882   16708 fix.go:54] fixHost starting: 
	I0429 05:41:24.670420   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:24.721136   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:24.721183   16708 fix.go:112] recreateIfNeeded on force-systemd-env-746000: state= err=unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:24.721200   16708 fix.go:117] machineExists: false. err=machine does not exist
	I0429 05:41:24.742993   16708 out.go:177] * docker "force-systemd-env-746000" container is missing, will recreate.
	I0429 05:41:24.785360   16708 delete.go:124] DEMOLISHING force-systemd-env-746000 ...
	I0429 05:41:24.785474   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:24.833454   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	W0429 05:41:24.833514   16708 stop.go:83] unable to get state: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:24.833535   16708 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:24.833952   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:24.881860   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:24.881918   16708 delete.go:82] Unable to get host status for force-systemd-env-746000, assuming it has already been deleted: state: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:24.882002   16708 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-746000
	W0429 05:41:24.929933   16708 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-746000 returned with exit code 1
	I0429 05:41:24.929988   16708 kic.go:371] could not find the container force-systemd-env-746000 to remove it. will try anyways
	I0429 05:41:24.930064   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:24.977744   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	W0429 05:41:24.977791   16708 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:24.977870   16708 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-746000 /bin/bash -c "sudo init 0"
	W0429 05:41:25.025991   16708 cli_runner.go:211] docker exec --privileged -t force-systemd-env-746000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 05:41:25.026023   16708 oci.go:650] error shutdown force-systemd-env-746000: docker exec --privileged -t force-systemd-env-746000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:26.027909   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:26.078938   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:26.078986   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:26.078997   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:26.079021   16708 retry.go:31] will retry after 421.470595ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
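
// The demolish path above asks init inside the privileged container to power
// off, then polls .State.Status until it reads "exited". A sketch of that
// loop; both docker commands are the ones shown in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func shutdown(name string) error {
	_ = exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run()
	for i := 0; i < 10; i++ {
		if s, err := containerStatus(name); err == nil && s == "exited" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("couldn't verify container %q exited", name)
}

func main() { fmt.Println(shutdown("force-systemd-env-746000")) }
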
	I0429 05:41:26.502919   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:26.554080   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:26.554134   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:26.554145   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:26.554167   16708 retry.go:31] will retry after 887.194407ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:27.443747   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:27.494725   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:27.494779   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:27.494792   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:27.494815   16708 retry.go:31] will retry after 1.309210328s: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:28.805518   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:28.856668   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:28.856719   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:28.856736   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:28.856758   16708 retry.go:31] will retry after 1.679144995s: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:30.537067   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:30.588223   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:30.588278   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:30.588290   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:30.588314   16708 retry.go:31] will retry after 1.286753053s: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:31.875650   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:31.930071   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:31.930121   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:31.930133   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:31.930152   16708 retry.go:31] will retry after 4.09025773s: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:36.021323   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:36.073208   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:36.073258   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:36.073269   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:36.073298   16708 retry.go:31] will retry after 5.444692022s: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:41.518405   16708 cli_runner.go:164] Run: docker container inspect force-systemd-env-746000 --format={{.State.Status}}
	W0429 05:41:41.568504   16708 cli_runner.go:211] docker container inspect force-systemd-env-746000 --format={{.State.Status}} returned with exit code 1
	I0429 05:41:41.568551   16708 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:41:41.568562   16708 oci.go:664] temporary error: container force-systemd-env-746000 status is  but expect it to be exited
	I0429 05:41:41.568591   16708 oci.go:88] couldn't shut down force-systemd-env-746000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	 
	I0429 05:41:41.568667   16708 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-746000
	I0429 05:41:41.616956   16708 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-746000
	W0429 05:41:41.665674   16708 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-746000 returned with exit code 1
	I0429 05:41:41.665788   16708 cli_runner.go:164] Run: docker network inspect force-systemd-env-746000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:41:41.714217   16708 cli_runner.go:164] Run: docker network rm force-systemd-env-746000
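
// With shutdown unverifiable, teardown falls through to force removal:
// delete the container along with its anonymous volumes, then the
// per-profile network, i.e. the two commands just issued above.
package main

import (
	"fmt"
	"os/exec"
)

func docker(args ...string) {
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("docker %s: %serr=%v\n", args[0], out, err)
}

func main() {
	name := "force-systemd-env-746000"
	docker("rm", "-f", "-v", name) // "No such container" is tolerable here
	docker("network", "rm", name)
}
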
	I0429 05:41:41.814008   16708 fix.go:124] Sleeping 1 second for extra luck!
	I0429 05:41:42.816189   16708 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:41:42.838434   16708 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 05:41:42.838614   16708 start.go:159] libmachine.API.Create for "force-systemd-env-746000" (driver="docker")
	I0429 05:41:42.838643   16708 client.go:168] LocalClient.Create starting
	I0429 05:41:42.838852   16708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:41:42.838949   16708 main.go:141] libmachine: Decoding PEM data...
	I0429 05:41:42.838976   16708 main.go:141] libmachine: Parsing certificate...
	I0429 05:41:42.839067   16708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:41:42.839142   16708 main.go:141] libmachine: Decoding PEM data...
	I0429 05:41:42.839157   16708 main.go:141] libmachine: Parsing certificate...
	I0429 05:41:42.839859   16708 cli_runner.go:164] Run: docker network inspect force-systemd-env-746000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:41:42.889598   16708 cli_runner.go:211] docker network inspect force-systemd-env-746000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:41:42.889698   16708 network_create.go:281] running [docker network inspect force-systemd-env-746000] to gather additional debugging logs...
	I0429 05:41:42.889712   16708 cli_runner.go:164] Run: docker network inspect force-systemd-env-746000
	W0429 05:41:42.938051   16708 cli_runner.go:211] docker network inspect force-systemd-env-746000 returned with exit code 1
	I0429 05:41:42.938082   16708 network_create.go:284] error running [docker network inspect force-systemd-env-746000]: docker network inspect force-systemd-env-746000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-746000 not found
	I0429 05:41:42.938095   16708 network_create.go:286] output of [docker network inspect force-systemd-env-746000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-746000 not found
	
	** /stderr **
	I0429 05:41:42.938214   16708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:41:42.988542   16708 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:42.990067   16708 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:42.991645   16708 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:42.992946   16708 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:42.994499   16708 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:42.996059   16708 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:41:42.996471   16708 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021fd990}
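
// The skips above walk /24 candidates from 192.168.49.0 upward, stepping the
// third octet by 9 and taking the first subnet no existing network reserves;
// this run lands on 192.168.103.0/24. A sketch of that walk (the step of 9
// is read off the logged sequence, not taken from minikube source).
package main

import "fmt"

func firstFreeSubnet(reserved map[string]bool) string {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !reserved[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(reserved)) // prints 192.168.103.0/24
}
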
	I0429 05:41:42.996488   16708 network_create.go:124] attempt to create docker network force-systemd-env-746000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0429 05:41:42.996555   16708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-746000 force-systemd-env-746000
	I0429 05:41:43.080221   16708 network_create.go:108] docker network force-systemd-env-746000 192.168.103.0/24 created
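
// The network just created, reproduced as a standalone command: a bridge
// with the chosen subnet and gateway, jumbo MTU, masquerade and
// inter-container-connectivity options, and the minikube ownership labels.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "force-systemd-env-746000"
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.103.0/24",
		"--gateway=192.168.103.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=65535",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name,
	).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
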
	I0429 05:41:43.080273   16708 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-746000" container
	I0429 05:41:43.080380   16708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:41:43.130605   16708 cli_runner.go:164] Run: docker volume create force-systemd-env-746000 --label name.minikube.sigs.k8s.io=force-systemd-env-746000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:41:43.179597   16708 oci.go:103] Successfully created a docker volume force-systemd-env-746000
	I0429 05:41:43.179706   16708 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-746000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-746000 --entrypoint /usr/bin/test -v force-systemd-env-746000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:41:43.426192   16708 oci.go:107] Successfully prepared a docker volume force-systemd-env-746000
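
// Volume preparation as just logged: create the labeled per-profile volume,
// then run a short-lived "preload sidecar" whose entrypoint is /usr/bin/test;
// it exits 0 only if the volume already contains a /var/lib directory. The
// image digest suffix is omitted here for brevity.
package main

import (
	"fmt"
	"os/exec"
)

func docker(args ...string) error {
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("docker %s: %serr=%v\n", args[0], out, err)
	return err
}

func main() {
	name := "force-systemd-env-746000"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706"
	_ = docker("volume", "create", name,
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--label", "created_by.minikube.sigs.k8s.io=true")
	_ = docker("run", "--rm",
		"--name", name+"-preload-sidecar",
		"--label", "created_by.minikube.sigs.k8s.io=true",
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--entrypoint", "/usr/bin/test",
		"-v", name+":/var", image, "-d", "/var/lib")
}
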
	I0429 05:41:43.426243   16708 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:41:43.426260   16708 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:41:43.426380   16708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-746000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:47:42.850790   16708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:47:42.850917   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:42.902518   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:42.902633   16708 retry.go:31] will retry after 366.844617ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:43.271230   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:43.322631   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:43.322752   16708 retry.go:31] will retry after 408.710962ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:43.733137   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:43.782725   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:43.782825   16708 retry.go:31] will retry after 458.178409ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:44.243465   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:44.293650   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	W0429 05:47:44.293765   16708 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	
	W0429 05:47:44.293787   16708 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:44.293839   16708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:47:44.293905   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:44.341812   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:44.341909   16708 retry.go:31] will retry after 303.659112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:44.647461   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:44.696688   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:44.696782   16708 retry.go:31] will retry after 475.960984ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:45.175159   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:45.226310   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:45.226402   16708 retry.go:31] will retry after 435.791969ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:45.663450   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:45.714612   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	W0429 05:47:45.714716   16708 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	
	W0429 05:47:45.714734   16708 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:45.714742   16708 start.go:128] duration metric: took 6m2.887622082s to createHost
	I0429 05:47:45.714815   16708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:47:45.714879   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:45.763852   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:45.763948   16708 retry.go:31] will retry after 251.240804ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:46.016701   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:46.069605   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:46.069708   16708 retry.go:31] will retry after 395.929653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:46.467325   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:46.516632   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:46.516729   16708 retry.go:31] will retry after 328.952004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:46.846219   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:46.898747   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:46.898843   16708 retry.go:31] will retry after 791.734269ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:47.692983   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:47.742580   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	W0429 05:47:47.742689   16708 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	
	W0429 05:47:47.742706   16708 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:47.742768   16708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:47:47.742829   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:47.790830   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:47.790924   16708 retry.go:31] will retry after 349.109373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:48.141304   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:48.190152   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:48.190250   16708 retry.go:31] will retry after 192.134498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:48.383651   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:48.435496   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	I0429 05:47:48.435591   16708 retry.go:31] will retry after 804.428574ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:49.240862   16708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000
	W0429 05:47:49.290474   16708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000 returned with exit code 1
	W0429 05:47:49.290585   16708 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	
	W0429 05:47:49.290605   16708 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-746000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-746000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	I0429 05:47:49.290623   16708 fix.go:56] duration metric: took 6m24.609208078s for fixHost
	I0429 05:47:49.290630   16708 start.go:83] releasing machines lock for "force-systemd-env-746000", held for 6m24.609266744s
	W0429 05:47:49.290727   16708 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-746000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-746000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 05:47:49.333349   16708 out.go:177] 
	W0429 05:47:49.354393   16708 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 05:47:49.354427   16708 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 05:47:49.354464   16708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 05:47:49.397370   16708 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-746000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-746000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-746000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (198.65227ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-746000 host status: state: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000
	

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-746000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-29 05:47:49.65141 -0700 PDT m=+6378.344635159
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-746000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-746000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-746000",
	        "Id": "e7a032de94d3f596b43215e82411427f45cf5b3be98ac6b507d7e263520fde45",
	        "Created": "2024-04-29T12:41:43.040426377Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-746000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
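Note that the post-mortem `docker inspect force-systemd-env-746000` above matched the leftover network object (Scope, bridge driver, IPAM config), not a container: the container was never created, but the network from the second attempt survived. A sketch that disambiguates which object types remain, using explicit per-type inspects (`exists` is an illustrative helper, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

// exists reports whether `docker <kind> inspect` finds the named object.
func exists(kind, name string) bool {
	return exec.Command("docker", kind, "inspect", name).Run() == nil
}

func main() {
	name := "force-systemd-env-746000"
	fmt.Println("container:", exists("container", name)) // false in this run
	fmt.Println("network:  ", exists("network", name))   // true: only the network remains
	fmt.Println("volume:   ", exists("volume", name))
}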
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-746000 -n force-systemd-env-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-746000 -n force-systemd-env-746000: exit status 7 (112.217113ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:47:49.816263   17240 status.go:249] status error: host: state: unknown state "force-systemd-env-746000": docker container inspect force-systemd-env-746000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-746000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-746000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-746000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-746000
--- FAIL: TestForceSystemdEnv (754.44s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (893.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-750000 ssh -- ls /minikube-host
E0429 04:32:35.299236    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:33:20.201972    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:34:43.304141    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:37:35.364498    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:38:20.268136    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:42:35.373958    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:43:20.276513    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:45:38.429595    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-750000 ssh -- ls /minikube-host: signal: killed (14m52.822798813s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-750000 ssh -- ls /minikube-host" : signal: killed
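The failing step itself is a single `ls` of the host bind mount from inside the guest over SSH; here it hung for 14m52s until the test framework killed it. A sketch of the same check with an explicit deadline so a wedged guest cannot stall the run (binary path and profile name are the ones from the log; the 30s budget is an assumption):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Kill the ssh if it does not return promptly instead of waiting ~15m.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx,
		"out/minikube-darwin-amd64", "-p", "mount-start-1-750000",
		"ssh", "--", "ls", "/minikube-host",
	).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}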
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-750000
helpers_test.go:235: (dbg) docker inspect mount-start-1-750000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "431b820dee375bd1d3deca2c3b8c3270509b09a9191507fd54f8ee405e44692d",
	        "Created": "2024-04-29T11:31:19.119396873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 121740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T11:31:19.277217762Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c2e7b1115438f0e876ee0c793febc72a876a26c7b12b8e5475b223c894686c4",
	        "ResolvConfPath": "/var/lib/docker/containers/431b820dee375bd1d3deca2c3b8c3270509b09a9191507fd54f8ee405e44692d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/431b820dee375bd1d3deca2c3b8c3270509b09a9191507fd54f8ee405e44692d/hostname",
	        "HostsPath": "/var/lib/docker/containers/431b820dee375bd1d3deca2c3b8c3270509b09a9191507fd54f8ee405e44692d/hosts",
	        "LogPath": "/var/lib/docker/containers/431b820dee375bd1d3deca2c3b8c3270509b09a9191507fd54f8ee405e44692d/431b820dee375bd1d3deca2c3b8c3270509b09a9191507fd54f8ee405e44692d-json.log",
	        "Name": "/mount-start-1-750000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-1-750000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-1-750000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/482e4bec1a9ddc7fb6fe1858821921fc79a39cada5719d72c9074642595c25c5-init/diff:/var/lib/docker/overlay2/124d4fc60143f2384e3048c74312e927d283e8b76937cd1fee44f9acf7b4acdf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/482e4bec1a9ddc7fb6fe1858821921fc79a39cada5719d72c9074642595c25c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/482e4bec1a9ddc7fb6fe1858821921fc79a39cada5719d72c9074642595c25c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/482e4bec1a9ddc7fb6fe1858821921fc79a39cada5719d72c9074642595c25c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-1-750000",
	                "Source": "/var/lib/docker/volumes/mount-start-1-750000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-1-750000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-1-750000",
	                "name.minikube.sigs.k8s.io": "mount-start-1-750000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f3a0a5fdcf5d6d8c9e996be55355b90ffe435abaf41260330f121bbcd8f6f19f",
	            "SandboxKey": "/var/run/docker/netns/f3a0a5fdcf5d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54520"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54521"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54517"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-1-750000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "35bd779a6a3da2722b5671f612ffe6e7a915866bc7aed0dda49e8d0c7f914e5f",
	                    "EndpointID": "a53ceef76a133c9b0297d8a0d3691e0d915bef08dabc676fe0b53be28b9e582d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-1-750000",
	                        "431b820dee37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
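The full `docker inspect` dump above is what the test helpers capture for post-mortems; single fields are normally pulled out with a Go template instead, exactly as the harness does further down with `docker container inspect -f "{{.State.Status}}"`. A minimal sketch of that pattern in Go, assuming only the container name from this report (the helper name and everything else here is illustrative, not minikube's code):

	// Pull one field out of `docker container inspect` with a Go
	// template rather than parsing the full JSON dump above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerStatus(name string) (string, error) {
		// Equivalent to: docker container inspect -f '{{.State.Status}}' <name>
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{.State.Status}}", name).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := containerStatus("mount-start-1-750000")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("container status:", status) // e.g. "running"
	}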
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-750000 -n mount-start-1-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-750000 -n mount-start-1-750000: exit status 6 (371.247027ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 04:46:17.863980   14286 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-1-750000" does not appear in /Users/jenkins/minikube-integration/18756-6674/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-750000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (893.25s)
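The failure recorded above is not the container itself (its state is "running" in the inspect output) but the kubeconfig: status.go errors out because no "mount-start-1-750000" context was ever written to /Users/jenkins/minikube-integration/18756-6674/kubeconfig, which is also why minikube suggests `minikube update-context`. A minimal sketch, assuming k8s.io/client-go is available, of the kind of lookup that produces that error (illustrative only, not minikube's actual status.go):

	// Load a kubeconfig and check whether a named context exists in it.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/18756-6674/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		name := "mount-start-1-750000"
		if _, ok := cfg.Contexts[name]; !ok {
			// The situation this test hit: the container is up, but the
			// cluster was never registered in the kubeconfig.
			fmt.Printf("%q does not appear in %s\n", name, path)
			return
		}
		fmt.Println("context found; update-context not needed")
	}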

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (756.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-888000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0429 04:47:35.382649    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:48:20.284838    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:51:23.431224    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:52:35.487865    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:53:20.389968    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:57:35.496936    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:58:20.399389    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-888000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.850254654s)

                                                
                                                
-- stdout --
	* [multinode-888000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-888000" primary control-plane node in "multinode-888000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-888000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
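The stderr below shows two recoverable failures minikube works through before giving up: a subnet collision ("Pool overlaps with other one on this address space", retried successfully on 192.168.76.0/24) and a long series of retry.go backoffs against a container that never appeared ("will retry after 262.2731ms", "will retry after 231.413115ms", ...). A minimal sketch of that jittered-backoff retry pattern in Go, assuming nothing about minikube's actual retry.go beyond what the log shows:

	// Retry a flaky operation with a growing, jittered delay until it
	// succeeds or attempts run out -- the shape of the "will retry
	// after ..." lines in the stderr below.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Grow the delay each attempt and add jitter so concurrent
			// callers do not retry in lockstep.
			delay := base*time.Duration(i+1) +
				time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("No such container: multinode-888000")
			}
			return nil
		})
		fmt.Println("final result:", err)
	}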
** stderr ** 
	I0429 04:47:26.900810   14400 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:47:26.901083   14400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:47:26.901102   14400 out.go:304] Setting ErrFile to fd 2...
	I0429 04:47:26.901106   14400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:47:26.901355   14400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 04:47:26.903126   14400 out.go:298] Setting JSON to false
	I0429 04:47:26.925394   14400 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4616,"bootTime":1714386630,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 04:47:26.925484   14400 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:47:26.947263   14400 out.go:177] * [multinode-888000] minikube v1.33.0 on Darwin 14.4.1
	I0429 04:47:27.012939   14400 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 04:47:26.991256   14400 notify.go:220] Checking for updates...
	I0429 04:47:27.060592   14400 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 04:47:27.081510   14400 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 04:47:27.102720   14400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:47:27.123625   14400 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 04:47:27.144280   14400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:47:27.166046   14400 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:47:27.220509   14400 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 04:47:27.220687   14400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:47:27.327687   14400 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-29 11:47:27.316937762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:47:27.349506   14400 out.go:177] * Using the docker driver based on user configuration
	I0429 04:47:27.371084   14400 start.go:297] selected driver: docker
	I0429 04:47:27.371123   14400 start.go:901] validating driver "docker" against <nil>
	I0429 04:47:27.371148   14400 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:47:27.375651   14400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:47:27.481028   14400 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-29 11:47:27.470417625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:47:27.481209   14400 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:47:27.481397   14400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 04:47:27.503290   14400 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 04:47:27.524114   14400 cni.go:84] Creating CNI manager for ""
	I0429 04:47:27.524147   14400 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 04:47:27.524159   14400 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 04:47:27.524269   14400 start.go:340] cluster config:
	{Name:multinode-888000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-888000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:47:27.545860   14400 out.go:177] * Starting "multinode-888000" primary control-plane node in "multinode-888000" cluster
	I0429 04:47:27.587986   14400 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 04:47:27.608815   14400 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 04:47:27.651145   14400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:47:27.651198   14400 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 04:47:27.651218   14400 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 04:47:27.651247   14400 cache.go:56] Caching tarball of preloaded images
	I0429 04:47:27.651472   14400 preload.go:173] Found /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 04:47:27.651494   14400 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 04:47:27.653028   14400 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/multinode-888000/config.json ...
	I0429 04:47:27.653152   14400 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/multinode-888000/config.json: {Name:mk5a69ad10afc13e70e29f2fe4251d32385b4ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:47:27.702380   14400 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 04:47:27.702409   14400 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 04:47:27.702429   14400 cache.go:194] Successfully downloaded all kic artifacts
	I0429 04:47:27.702500   14400 start.go:360] acquireMachinesLock for multinode-888000: {Name:mk7ef4e0a331afdc76a7a1515dd33ef411b9e213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:47:27.702663   14400 start.go:364] duration metric: took 149.931µs to acquireMachinesLock for "multinode-888000"
	I0429 04:47:27.702691   14400 start.go:93] Provisioning new machine with config: &{Name:multinode-888000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-888000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 04:47:27.702765   14400 start.go:125] createHost starting for "" (driver="docker")
	I0429 04:47:27.744953   14400 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 04:47:27.745260   14400 start.go:159] libmachine.API.Create for "multinode-888000" (driver="docker")
	I0429 04:47:27.745295   14400 client.go:168] LocalClient.Create starting
	I0429 04:47:27.745444   14400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 04:47:27.745520   14400 main.go:141] libmachine: Decoding PEM data...
	I0429 04:47:27.745541   14400 main.go:141] libmachine: Parsing certificate...
	I0429 04:47:27.745605   14400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 04:47:27.745655   14400 main.go:141] libmachine: Decoding PEM data...
	I0429 04:47:27.745664   14400 main.go:141] libmachine: Parsing certificate...
	I0429 04:47:27.746272   14400 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 04:47:27.793215   14400 cli_runner.go:211] docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 04:47:27.793332   14400 network_create.go:281] running [docker network inspect multinode-888000] to gather additional debugging logs...
	I0429 04:47:27.793351   14400 cli_runner.go:164] Run: docker network inspect multinode-888000
	W0429 04:47:27.841250   14400 cli_runner.go:211] docker network inspect multinode-888000 returned with exit code 1
	I0429 04:47:27.841281   14400 network_create.go:284] error running [docker network inspect multinode-888000]: docker network inspect multinode-888000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-888000 not found
	I0429 04:47:27.841291   14400 network_create.go:286] output of [docker network inspect multinode-888000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-888000 not found
	
	** /stderr **
	I0429 04:47:27.841419   14400 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 04:47:27.892146   14400 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 04:47:27.893908   14400 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 04:47:27.894278   14400 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002152840}
	I0429 04:47:27.894295   14400 network_create.go:124] attempt to create docker network multinode-888000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 04:47:27.894371   14400 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	W0429 04:47:27.942875   14400 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000 returned with exit code 1
	W0429 04:47:27.942910   14400 network_create.go:149] failed to create docker network multinode-888000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 04:47:27.942929   14400 network_create.go:116] failed to create docker network multinode-888000 192.168.67.0/24, will retry: subnet is taken
	I0429 04:47:27.944528   14400 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 04:47:27.944912   14400 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002411430}
	I0429 04:47:27.944925   14400 network_create.go:124] attempt to create docker network multinode-888000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 04:47:27.944996   14400 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	I0429 04:47:28.029049   14400 network_create.go:108] docker network multinode-888000 192.168.76.0/24 created
	I0429 04:47:28.029083   14400 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-888000" container
	I0429 04:47:28.029201   14400 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 04:47:28.077991   14400 cli_runner.go:164] Run: docker volume create multinode-888000 --label name.minikube.sigs.k8s.io=multinode-888000 --label created_by.minikube.sigs.k8s.io=true
	I0429 04:47:28.126591   14400 oci.go:103] Successfully created a docker volume multinode-888000
	I0429 04:47:28.126705   14400 cli_runner.go:164] Run: docker run --rm --name multinode-888000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-888000 --entrypoint /usr/bin/test -v multinode-888000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 04:47:28.437318   14400 oci.go:107] Successfully prepared a docker volume multinode-888000
	I0429 04:47:28.437362   14400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:47:28.437374   14400 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 04:47:28.437467   14400 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-888000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 04:53:27.852819   14400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 04:53:27.852935   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:27.904933   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:53:27.905066   14400 retry.go:31] will retry after 262.2731ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:28.169751   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:28.222571   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:53:28.222686   14400 retry.go:31] will retry after 231.413115ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:28.456446   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:28.511267   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:53:28.511357   14400 retry.go:31] will retry after 341.683202ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:28.854035   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:28.907882   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:53:28.907983   14400 retry.go:31] will retry after 525.824704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:29.436176   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:29.486997   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 04:53:29.487098   14400 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 04:53:29.487116   14400 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:29.487172   14400 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 04:53:29.487235   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:29.535842   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:53:29.535932   14400 retry.go:31] will retry after 364.828025ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:29.902325   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:29.954092   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:53:29.954189   14400 retry.go:31] will retry after 381.597512ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:30.338231   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:30.390057   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:53:30.390158   14400 retry.go:31] will retry after 591.657993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:30.983479   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:53:31.035287   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 04:53:31.035388   14400 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 04:53:31.035413   14400 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:31.035431   14400 start.go:128] duration metric: took 6m3.225515696s to createHost
	I0429 04:53:31.035438   14400 start.go:83] releasing machines lock for "multinode-888000", held for 6m3.22563081s
	W0429 04:53:31.035454   14400 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 04:53:31.035876   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:31.083877   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:31.083935   14400 delete.go:82] Unable to get host status for multinode-888000, assuming it has already been deleted: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	W0429 04:53:31.084021   14400 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 04:53:31.084029   14400 start.go:728] Will try again in 5 seconds ...
	I0429 04:53:36.086427   14400 start.go:360] acquireMachinesLock for multinode-888000: {Name:mk7ef4e0a331afdc76a7a1515dd33ef411b9e213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 04:53:36.086629   14400 start.go:364] duration metric: took 159.097µs to acquireMachinesLock for "multinode-888000"
	I0429 04:53:36.086670   14400 start.go:96] Skipping create...Using existing machine configuration
	I0429 04:53:36.086688   14400 fix.go:54] fixHost starting: 
	I0429 04:53:36.087082   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:36.138504   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:36.138563   14400 fix.go:112] recreateIfNeeded on multinode-888000: state= err=unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:36.138585   14400 fix.go:117] machineExists: false. err=machine does not exist
	I0429 04:53:36.159430   14400 out.go:177] * docker "multinode-888000" container is missing, will recreate.
	I0429 04:53:36.202262   14400 delete.go:124] DEMOLISHING multinode-888000 ...
	I0429 04:53:36.202486   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:36.252061   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 04:53:36.252115   14400 stop.go:83] unable to get state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:36.252132   14400 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:36.252520   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:36.301004   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:36.301055   14400 delete.go:82] Unable to get host status for multinode-888000, assuming it has already been deleted: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:36.301137   14400 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 04:53:36.349196   14400 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 04:53:36.349237   14400 kic.go:371] could not find the container multinode-888000 to remove it. will try anyways
	I0429 04:53:36.349316   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:36.396666   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 04:53:36.396709   14400 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:36.396795   14400 cli_runner.go:164] Run: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0"
	W0429 04:53:36.445395   14400 cli_runner.go:211] docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 04:53:36.445454   14400 oci.go:650] error shutdown multinode-888000: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:37.446068   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:37.497943   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:37.497991   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:37.498003   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:37.498028   14400 retry.go:31] will retry after 266.914038ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:37.767365   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:37.819656   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:37.819707   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:37.819716   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:37.819741   14400 retry.go:31] will retry after 578.593493ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:38.400693   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:38.453044   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:38.453088   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:38.453097   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:38.453122   14400 retry.go:31] will retry after 698.398694ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:39.153351   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:39.206350   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:39.206395   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:39.206408   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:39.206433   14400 retry.go:31] will retry after 2.037460765s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:41.244440   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:41.296731   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:41.296779   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:41.296788   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:41.296810   14400 retry.go:31] will retry after 2.274565881s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:43.572612   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:43.625152   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:43.625195   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:43.625205   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:43.625226   14400 retry.go:31] will retry after 3.74131122s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:47.366961   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:47.418281   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:47.418326   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:47.418334   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:47.418359   14400 retry.go:31] will retry after 7.162706789s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:54.582881   14400 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 04:53:54.635633   14400 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 04:53:54.635677   14400 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:53:54.635685   14400 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 04:53:54.635713   14400 oci.go:88] couldn't shut down multinode-888000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	 
	I0429 04:53:54.635789   14400 cli_runner.go:164] Run: docker rm -f -v multinode-888000
	I0429 04:53:54.684520   14400 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 04:53:54.732874   14400 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 04:53:54.732994   14400 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 04:53:54.781055   14400 cli_runner.go:164] Run: docker network rm multinode-888000
	I0429 04:53:54.881675   14400 fix.go:124] Sleeping 1 second for extra luck!
	I0429 04:53:55.882225   14400 start.go:125] createHost starting for "" (driver="docker")
	I0429 04:53:55.904161   14400 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 04:53:55.904265   14400 start.go:159] libmachine.API.Create for "multinode-888000" (driver="docker")
	I0429 04:53:55.904286   14400 client.go:168] LocalClient.Create starting
	I0429 04:53:55.904394   14400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 04:53:55.904442   14400 main.go:141] libmachine: Decoding PEM data...
	I0429 04:53:55.904455   14400 main.go:141] libmachine: Parsing certificate...
	I0429 04:53:55.904500   14400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 04:53:55.904535   14400 main.go:141] libmachine: Decoding PEM data...
	I0429 04:53:55.904543   14400 main.go:141] libmachine: Parsing certificate...
	I0429 04:53:55.904920   14400 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 04:53:55.955228   14400 cli_runner.go:211] docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 04:53:55.955321   14400 network_create.go:281] running [docker network inspect multinode-888000] to gather additional debugging logs...
	I0429 04:53:55.955339   14400 cli_runner.go:164] Run: docker network inspect multinode-888000
	W0429 04:53:56.003134   14400 cli_runner.go:211] docker network inspect multinode-888000 returned with exit code 1
	I0429 04:53:56.003173   14400 network_create.go:284] error running [docker network inspect multinode-888000]: docker network inspect multinode-888000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-888000 not found
	I0429 04:53:56.003192   14400 network_create.go:286] output of [docker network inspect multinode-888000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-888000 not found
	
	** /stderr **
	I0429 04:53:56.003321   14400 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 04:53:56.053236   14400 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 04:53:56.054610   14400 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 04:53:56.056277   14400 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 04:53:56.057952   14400 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 04:53:56.058581   14400 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000691700}
	I0429 04:53:56.058600   14400 network_create.go:124] attempt to create docker network multinode-888000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 04:53:56.058698   14400 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	I0429 04:53:56.143103   14400 network_create.go:108] docker network multinode-888000 192.168.85.0/24 created
	I0429 04:53:56.143136   14400 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-888000" container
	I0429 04:53:56.143239   14400 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 04:53:56.191697   14400 cli_runner.go:164] Run: docker volume create multinode-888000 --label name.minikube.sigs.k8s.io=multinode-888000 --label created_by.minikube.sigs.k8s.io=true
	I0429 04:53:56.240787   14400 oci.go:103] Successfully created a docker volume multinode-888000
	I0429 04:53:56.240902   14400 cli_runner.go:164] Run: docker run --rm --name multinode-888000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-888000 --entrypoint /usr/bin/test -v multinode-888000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 04:53:56.482229   14400 oci.go:107] Successfully prepared a docker volume multinode-888000
	I0429 04:53:56.482268   14400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:53:56.482281   14400 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 04:53:56.482377   14400 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-888000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 04:59:55.917563   14400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 04:59:55.917684   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:55.970774   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:55.970888   14400 retry.go:31] will retry after 128.054854ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:56.101371   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:56.150550   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:56.150655   14400 retry.go:31] will retry after 233.457205ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:56.384371   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:56.443295   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:56.443406   14400 retry.go:31] will retry after 751.226686ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:57.197102   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:57.249465   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 04:59:57.249569   14400 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 04:59:57.249587   14400 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:57.249643   14400 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 04:59:57.249694   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:57.340253   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:57.340344   14400 retry.go:31] will retry after 207.286138ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:57.549900   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:57.609502   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:57.609601   14400 retry.go:31] will retry after 199.687429ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:57.811713   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:57.861988   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:57.862091   14400 retry.go:31] will retry after 354.943999ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:58.217427   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:58.268834   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:58.268927   14400 retry.go:31] will retry after 857.885682ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:59.128360   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:59.179251   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 04:59:59.179358   14400 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 04:59:59.179375   14400 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:59.179389   14400 start.go:128] duration metric: took 6m3.286200001s to createHost
	I0429 04:59:59.179464   14400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 04:59:59.179516   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:59.227368   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:59.227455   14400 retry.go:31] will retry after 317.824141ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:59.547673   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:59.598731   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:59.598831   14400 retry.go:31] will retry after 248.715992ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 04:59:59.849956   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 04:59:59.902117   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 04:59:59.902221   14400 retry.go:31] will retry after 741.930893ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:00:00.646541   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:00:00.697958   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:00:00.698069   14400 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:00:00.698084   14400 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:00:00.698159   14400 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:00:00.698213   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:00:00.747138   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:00:00.747228   14400 retry.go:31] will retry after 157.928111ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:00:00.906003   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:00:00.958257   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:00:00.958355   14400 retry.go:31] will retry after 347.761404ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:00:01.308514   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:00:01.361249   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:00:01.361352   14400 retry.go:31] will retry after 390.208503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:00:01.753970   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:00:01.804829   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:00:01.804928   14400 retry.go:31] will retry after 818.025297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:00:02.624661   14400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:00:02.677645   14400 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:00:02.677752   14400 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:00:02.677770   14400 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:00:02.677780   14400 fix.go:56] duration metric: took 6m26.579499879s for fixHost
	I0429 05:00:02.677787   14400 start.go:83] releasing machines lock for "multinode-888000", held for 6m26.579549515s
	W0429 05:00:02.677862   14400 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-888000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-888000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 05:00:02.721034   14400 out.go:177] 
	W0429 05:00:02.741877   14400 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 05:00:02.741921   14400 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 05:00:02.741946   14400 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 05:00:02.762857   14400 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-888000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
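Note where the 360-second createHost budget went, per the timestamps above: the preload extraction was launched at 04:53:56, and the very next log entry is the failed df probe at 04:59:55, so the run apparently spent essentially the whole budget in (or blocked behind) the image extraction. To test that step in isolation, the extraction can be rerun by hand exactly as logged; this is the command from the log verbatim, only re-wrapped (paths, volume name, and image digest are specific to this job, and it assumes the multinode-888000 Docker volume still exists):

	docker run --rm --entrypoint /usr/bin/tar \
	    -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
	    -v multinode-888000:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e \
	    -I lz4 -xf /preloaded.tar -C /extractDir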
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (111.169106ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:00:02.981221   14767 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (756.03s)
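Every failed probe in the capture above is the same call: docker container inspect with a Go template for .State.Status, exiting 1 with "No such container" because the container was removed and never recreated; minikube treats that as an unknown state, and the status check above renders it as "Nonexistent". A minimal sketch of the same probe with an explicit fallback (container name taken from this log; the fallback wording is ours, not minikube's):

	# Probe container state the way the cli_runner calls above do;
	# a "No such container" error from the daemon means it does not exist.
	if state=$(docker container inspect multinode-888000 --format '{{.State.Status}}' 2>&1); then
	    echo "state: ${state}"
	else
	    echo "state: Nonexistent (${state})"
	fi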

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (112.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (105.872449ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-888000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- rollout status deployment/busybox: exit status 1 (106.387166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.826235ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.336492ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.2353ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (115.98155ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.27031ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.699665ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.090233ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.654269ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.010304ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.316212ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.852425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (106.828714ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- exec  -- nslookup kubernetes.io: exit status 1 (107.374491ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- exec  -- nslookup kubernetes.default: exit status 1 (108.660267ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (107.307228ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (112.975477ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:01:55.154360   14855 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (112.17s)
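The recurring `error: no server found for cluster "multinode-888000"` is kubectl's client-config validation error; it usually means the kubeconfig context resolves to a cluster entry with no server URL recorded, which fits a start that timed out before an apiserver was ever reachable (that interpretation is ours, not stated in the log). What the kubeconfig actually contains can be checked directly; a small sketch (cluster name from this log):

	# List contexts, then print the server URL (if any) recorded for the cluster.
	kubectl config get-contexts
	kubectl config view -o jsonpath='{.clusters[?(@.name=="multinode-888000")].cluster.server}'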

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (105.81832ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-888000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (111.787815ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:01:55.423901   14864 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)
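A note on the two command forms in this report: `out/minikube-darwin-amd64 kubectl -p multinode-888000 -- ...` shells out to a kubectl matched to the cluster's Kubernetes version and, as the cluster name in the errors above indicates, targets the profile's context; the MultiNodeLabels test further below does roughly the same with plain kubectl. A minimal equivalence sketch (both fail identically in this run because the cluster was never brought up):

	# Two roughly equivalent ways to query the profile's cluster.
	out/minikube-darwin-amd64 kubectl -p multinode-888000 -- get pods -o jsonpath='{.items[*].metadata.name}'
	kubectl --context multinode-888000 get pods -o jsonpath='{.items[*].metadata.name}'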

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-888000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-888000 -v 3 --alsologtostderr: exit status 80 (201.867949ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:01:55.487159   14868 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:01:55.487431   14868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:55.487442   14868 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:55.487446   14868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:55.487606   14868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:01:55.487943   14868 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:01:55.488212   14868 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:01:55.488588   14868 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:01:55.536807   14868 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:01:55.560291   14868 out.go:177] 
	W0429 05:01:55.581491   14868 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-888000 host status: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-888000 host status: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	I0429 05:01:55.603287   14868 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-888000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (112.450326ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:01:55.790617   14874 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)
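
The GUEST_STATUS failures above all trace back to the same probe: the docker driver shells out to `docker container inspect` and, when the daemon answers "No such container", reports the host state as unknown, which the status path surfaces as Nonexistent. A minimal sketch of that probe (hypothetical helper, not minikube's actual source):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState mirrors the check seen in the logs above:
    // `docker container inspect <name> --format={{.State.Status}}`.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            // "Error response from daemon: No such container: ..." lands here;
            // the caller reports the host as Nonexistent.
            return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        if _, err := containerState("multinode-888000"); err != nil {
            fmt.Println("Nonexistent:", err)
        }
    }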

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-888000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-888000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.043284ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-888000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-888000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-888000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (113.102681ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:01:55.991830   14881 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)
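
The "unexpected end of JSON input" above is the generic error encoding/json returns for empty input: kubectl wrote nothing to stdout because the context is gone, so the label check decoded an empty byte slice. A minimal reproduction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl produced no stdout (only the "context was not found"
        // stderr), so the label decode sees an empty payload.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }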

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-888000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-1-750000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-888000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-888000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-888000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (111.944147ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:01:56.340388   14893 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
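
The ProfileList assertion decodes the `profile list --output json` payload and counts nodes per valid profile; with the cluster never created, the config still carries only the single default control-plane entry. A hedged sketch of that count, using a struct trimmed to the fields visible in the JSON above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct{ Name, IP string }
            }
        } `json:"valid"`
    }

    func main() {
        // Abbreviated from the payload logged above.
        data := []byte(`{"valid":[{"Name":"multinode-888000",` +
            `"Config":{"Nodes":[{"Name":"","IP":""}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(data, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // The test expected 3 nodes here but finds 1.
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
        }
    }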

TestMultiNode/serial/CopyFile (0.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status --output json --alsologtostderr: exit status 7 (111.530426ms)

-- stdout --
	{"Name":"multinode-888000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I0429 05:01:56.403091   14897 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:01:56.403375   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:56.403380   14897 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:56.403384   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:56.403562   14897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:01:56.403752   14897 out.go:298] Setting JSON to true
	I0429 05:01:56.403775   14897 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:01:56.403814   14897 notify.go:220] Checking for updates...
	I0429 05:01:56.404082   14897 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:01:56.404102   14897 status.go:255] checking status of multinode-888000 ...
	I0429 05:01:56.404480   14897 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:01:56.452013   14897 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:01:56.452063   14897 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:01:56.452080   14897 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:01:56.452101   14897 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:01:56.452109   14897 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-888000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (113.414162ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:01:56.617547   14903 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)
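
The CopyFile decode failure is a shape mismatch rather than bad JSON: with a single profile, `status --output json` prints one object, while the test unmarshals into a slice of statuses, so encoding/json refuses with "cannot unmarshal object into Go value of type []cmd.Status". A minimal reproduction (a local status type standing in for cmd.Status):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        // The exact stdout captured above.
        out := []byte(`{"Name":"multinode-888000","Host":"Nonexistent",` +
            `"Kubelet":"Nonexistent","APIServer":"Nonexistent",` +
            `"Kubeconfig":"Nonexistent","Worker":false}`)

        var many []status
        fmt.Println(json.Unmarshal(out, &many)) // cannot unmarshal object into []main.status

        var one status
        fmt.Println(json.Unmarshal(out, &one), one.Host) // <nil> Nonexistent
    }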

TestMultiNode/serial/StopNode (0.55s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 node stop m03: exit status 85 (157.937236ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-888000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status: exit status 7 (111.882838ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0429 05:01:56.888124   14909 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:01:56.888137   14909 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr: exit status 7 (112.188609ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:01:56.951304   14913 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:01:56.951484   14913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:56.951490   14913 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:56.951493   14913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:56.951674   14913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:01:56.951851   14913 out.go:298] Setting JSON to false
	I0429 05:01:56.951873   14913 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:01:56.951910   14913 notify.go:220] Checking for updates...
	I0429 05:01:56.952147   14913 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:01:56.952161   14913 status.go:255] checking status of multinode-888000 ...
	I0429 05:01:56.952552   14913 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:01:57.000328   14913 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:01:57.000380   14913 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:01:57.000410   14913 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:01:57.000430   14913 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:01:57.000438   14913 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr": multinode-888000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr": multinode-888000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr": multinode-888000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (112.808266ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:01:57.165053   14919 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)
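
Exit status 85 (GUEST_NODE_RETRIEVE) is raised before any Docker call: the profile's Nodes list, visible in the ProfileList JSON above, holds only the unnamed control-plane entry, so a lookup for "m03" can only fail. A hypothetical sketch of that lookup:

    package main

    import "fmt"

    type node struct{ Name string }

    // findNode stands in for the node retrieval that fails above.
    func findNode(nodes []node, name string) (node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        nodes := []node{{Name: ""}} // the lone control-plane node in the profile
        _, err := findNode(nodes, "m03")
        fmt.Println(err) // retrieving node: Could not find node m03
    }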

TestMultiNode/serial/StartAfterStop (52.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 node start m03 -v=7 --alsologtostderr: exit status 85 (152.419937ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0429 05:01:57.227822   14923 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:01:57.228039   14923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:57.228045   14923 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:57.228049   14923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:57.228216   14923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:01:57.228585   14923 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:01:57.228860   14923 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:01:57.250159   14923 out.go:177] 
	W0429 05:01:57.271789   14923 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0429 05:01:57.271804   14923 out.go:239] * 
	* 
	W0429 05:01:57.274912   14923 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 05:01:57.295685   14923 out.go:177] 

** /stderr **
multinode_test.go:284: I0429 05:01:57.227822   14923 out.go:291] Setting OutFile to fd 1 ...
I0429 05:01:57.228039   14923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 05:01:57.228045   14923 out.go:304] Setting ErrFile to fd 2...
I0429 05:01:57.228049   14923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 05:01:57.228216   14923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
I0429 05:01:57.228585   14923 mustload.go:65] Loading cluster: multinode-888000
I0429 05:01:57.228860   14923 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 05:01:57.250159   14923 out.go:177] 
W0429 05:01:57.271789   14923 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0429 05:01:57.271804   14923 out.go:239] * 
* 
W0429 05:01:57.274912   14923 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0429 05:01:57.295685   14923 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-888000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (113.342537ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:01:57.380382   14925 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:01:57.380569   14925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:57.380574   14925 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:57.380578   14925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:57.380766   14925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:01:57.380946   14925 out.go:298] Setting JSON to false
	I0429 05:01:57.380968   14925 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:01:57.381012   14925 notify.go:220] Checking for updates...
	I0429 05:01:57.382295   14925 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:01:57.382311   14925 status.go:255] checking status of multinode-888000 ...
	I0429 05:01:57.382707   14925 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:01:57.431153   14925 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:01:57.431214   14925 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:01:57.431231   14925 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:01:57.431251   14925 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:01:57.431258   14925 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
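
Note the timestamps on the repeated status checks in this subtest (05:01:57, 05:01:58, 05:02:00, 05:02:02, 05:02:06, 05:02:11, ...): the intervals grow, which is consistent with a poll-with-backoff loop waiting for the host to come up. A rough sketch of that pattern (hypothetical helper, not the test's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    // pollUntil retries check with a growing delay until it succeeds
    // or the deadline passes, roughly matching the cadence logged here.
    func pollUntil(check func() bool, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        wait := time.Second
        for time.Now().Before(deadline) {
            if check() {
                return true
            }
            time.Sleep(wait)
            wait += wait / 2 // back off between probes
        }
        return false
    }

    func main() {
        ok := pollUntil(func() bool { return false }, 5*time.Second)
        fmt.Println("host running:", ok) // false: the container never exists
    }
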
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (119.453072ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:01:58.331888   14929 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:01:58.332093   14929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:58.332099   14929 out.go:304] Setting ErrFile to fd 2...
	I0429 05:01:58.332102   14929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:01:58.332294   14929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:01:58.332474   14929 out.go:298] Setting JSON to false
	I0429 05:01:58.332502   14929 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:01:58.332539   14929 notify.go:220] Checking for updates...
	I0429 05:01:58.333708   14929 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:01:58.333846   14929 status.go:255] checking status of multinode-888000 ...
	I0429 05:01:58.334221   14929 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:01:58.384455   14929 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:01:58.384519   14929 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:01:58.384541   14929 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:01:58.384557   14929 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:01:58.384566   14929 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (122.255604ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:02:00.619656   14934 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:02:00.619934   14934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:00.619940   14934 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:00.619944   14934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:00.620134   14934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:02:00.620313   14934 out.go:298] Setting JSON to false
	I0429 05:02:00.620338   14934 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:02:00.620373   14934 notify.go:220] Checking for updates...
	I0429 05:02:00.620630   14934 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:02:00.620644   14934 status.go:255] checking status of multinode-888000 ...
	I0429 05:02:00.621019   14934 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:02:00.672272   14934 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:02:00.672336   14934 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:02:00.672355   14934 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:02:00.672372   14934 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:02:00.672382   14934 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (115.113767ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:02:02.891729   14938 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:02:02.891993   14938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:02.891999   14938 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:02.892003   14938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:02.892192   14938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:02:02.892370   14938 out.go:298] Setting JSON to false
	I0429 05:02:02.892391   14938 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:02:02.892440   14938 notify.go:220] Checking for updates...
	I0429 05:02:02.893315   14938 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:02:02.893377   14938 status.go:255] checking status of multinode-888000 ...
	I0429 05:02:02.894110   14938 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:02:02.941592   14938 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:02:02.941654   14938 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:02:02.941673   14938 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:02:02.941694   14938 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:02:02.941701   14938 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (118.166441ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:02:06.061301   14942 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:02:06.061506   14942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:06.061512   14942 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:06.061515   14942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:06.061690   14942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:02:06.061860   14942 out.go:298] Setting JSON to false
	I0429 05:02:06.061883   14942 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:02:06.061918   14942 notify.go:220] Checking for updates...
	I0429 05:02:06.062151   14942 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:02:06.062166   14942 status.go:255] checking status of multinode-888000 ...
	I0429 05:02:06.062542   14942 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:02:06.110296   14942 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:02:06.110367   14942 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:02:06.110384   14942 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:02:06.110405   14942 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:02:06.110412   14942 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (115.773306ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:02:11.671186   14947 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:02:11.671481   14947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:11.671486   14947 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:11.671490   14947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:11.671667   14947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:02:11.671851   14947 out.go:298] Setting JSON to false
	I0429 05:02:11.671873   14947 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:02:11.671925   14947 notify.go:220] Checking for updates...
	I0429 05:02:11.672152   14947 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:02:11.672166   14947 status.go:255] checking status of multinode-888000 ...
	I0429 05:02:11.673387   14947 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:02:11.721502   14947 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:02:11.721558   14947 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:02:11.721581   14947 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:02:11.721600   14947 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:02:11.721607   14947 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (136.702229ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:02:17.303965   14951 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:02:17.320113   14951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:17.320128   14951 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:17.320136   14951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:17.320501   14951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:02:17.320883   14951 out.go:298] Setting JSON to false
	I0429 05:02:17.320939   14951 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:02:17.321012   14951 notify.go:220] Checking for updates...
	I0429 05:02:17.323026   14951 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:02:17.323057   14951 status.go:255] checking status of multinode-888000 ...
	I0429 05:02:17.323601   14951 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:02:17.374374   14951 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:02:17.374443   14951 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:02:17.374465   14951 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:02:17.374481   14951 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:02:17.374489   14951 status.go:263] The "multinode-888000" host does not exist!

** /stderr **
E0429 05:02:18.556238    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (115.076273ms)

-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 05:02:32.989160   14959 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:02:32.989442   14959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:32.989450   14959 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:32.989466   14959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:32.989644   14959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:02:32.989829   14959 out.go:298] Setting JSON to false
	I0429 05:02:32.989858   14959 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:02:32.989894   14959 notify.go:220] Checking for updates...
	I0429 05:02:32.991236   14959 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:02:32.991253   14959 status.go:255] checking status of multinode-888000 ...
	I0429 05:02:32.991636   14959 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:02:33.039711   14959 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:02:33.039774   14959 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:02:33.039793   14959 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:02:33.039814   14959 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:02:33.039822   14959 status.go:263] The "multinode-888000" host does not exist!

                                                
                                                
** /stderr **
E0429 05:02:35.506222    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr: exit status 7 (118.175742ms)

                                                
                                                
-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:02:49.620052   14968 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:02:49.620335   14968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:49.620340   14968 out.go:304] Setting ErrFile to fd 2...
	I0429 05:02:49.620344   14968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:02:49.620514   14968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:02:49.620678   14968 out.go:298] Setting JSON to false
	I0429 05:02:49.620700   14968 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:02:49.620739   14968 notify.go:220] Checking for updates...
	I0429 05:02:49.620959   14968 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:02:49.620977   14968 status.go:255] checking status of multinode-888000 ...
	I0429 05:02:49.621372   14968 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:02:49.672108   14968 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:02:49.672185   14968 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:02:49.672202   14968 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:02:49.672222   14968 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:02:49.672229   14968 status.go:263] The "multinode-888000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-888000 status -v=7 --alsologtostderr" : exit status 7
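All three status calls above fail the same way: minikube's probe is essentially one docker container inspect per node, and a missing container is mapped to host "Nonexistent" with exit code 7. A minimal sketch of that probe (illustrative, not the actual status.go source), assuming only that the Docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState mirrors the cli_runner.go calls above: ask dockerd for
	// .State.Status and treat any non-zero exit as an unknown state.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			// A deleted container exits 1 with "No such container: <name>" on
			// stderr, which minikube reports as host "Nonexistent" (exit code 7).
			return "", fmt.Errorf("unknown state %q: %w: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("multinode-888000")
		fmt.Printf("state=%q err=%v\n", state, err)
	}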
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "ce1a43ab6d83f202747f9d30a31ce6b85638c8ee0a4f44ce1be8d5249834e8f0",
	        "Created": "2024-04-29T11:53:56.103384868Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
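Note that the JSON above is the leftover bridge network named multinode-888000 (hence the Scope/Driver/IPAM fields and the empty Containers map), not a container: a bare docker inspect matches any object type by name, so once the container is deleted the name resolves to the surviving network. Pinning the object type makes the post-mortem unambiguous; a small sketch in the same os/exec style as cli_runner.go (illustrative helper name):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspectAs restricts docker inspect to a single object type, so a stale
	// network cannot masquerade as the container of the same name.
	func inspectAs(objType, name string) ([]byte, error) {
		return exec.Command("docker", "inspect", "--type", objType, name).CombinedOutput()
	}

	func main() {
		for _, t := range []string{"container", "network"} {
			out, err := inspectAs(t, "multinode-888000")
			fmt.Printf("--type %s -> err=%v\n%s", t, err, out)
		}
	}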
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (112.528645ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:02:49.836058   14974 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (785.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-888000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-888000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-888000: exit status 82 (15.508345972s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-888000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-888000" : exit status 82
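Exit code 82 (GUEST_STOP_TIMEOUT, per the stderr above) falls out of the same missing container: with nothing to inspect, minikube keeps retrying "Stopping node" until its bounded backoff loop gives up. A minimal sketch of that verify-with-backoff pattern (illustrative names, not minikube's actual retry.go), assuming only the Docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// verifyExited polls docker until the container reports "exited", roughly
	// doubling the delay between attempts, echoing the retry.go lines later in
	// this log ("will retry after 450.304654ms ... 3.500456874s").
	func verifyExited(name string, attempts int) error {
		delay := 450 * time.Millisecond
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").CombinedOutput()
			if err == nil && strings.TrimSpace(string(out)) == "exited" {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("couldn't verify container %q is exited", name)
	}

	func main() {
		fmt.Println(verifyExited("multinode-888000", 5))
	}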
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-888000 --wait=true -v=8 --alsologtostderr
E0429 05:03:20.409410    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:07:35.526126    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:08:03.472665    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:08:20.427619    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:12:35.535142    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:13:20.438270    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-888000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m50.171193841s)

                                                
                                                
-- stdout --
	* [multinode-888000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-888000" primary control-plane node in "multinode-888000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* docker "multinode-888000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-888000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:03:05.474316   14995 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:03:05.474493   14995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:05.474498   14995 out.go:304] Setting ErrFile to fd 2...
	I0429 05:03:05.474502   14995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:03:05.474676   14995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:03:05.476058   14995 out.go:298] Setting JSON to false
	I0429 05:03:05.498045   14995 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5555,"bootTime":1714386630,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 05:03:05.498144   14995 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:03:05.519800   14995 out.go:177] * [multinode-888000] minikube v1.33.0 on Darwin 14.4.1
	I0429 05:03:05.562521   14995 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 05:03:05.584620   14995 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 05:03:05.562569   14995 notify.go:220] Checking for updates...
	I0429 05:03:05.606362   14995 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 05:03:05.627247   14995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:03:05.648552   14995 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 05:03:05.670643   14995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:03:05.692962   14995 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:03:05.693134   14995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:03:05.747911   14995 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 05:03:05.748057   14995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:03:05.854038   14995 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-29 12:03:05.842298245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:03:05.896395   14995 out.go:177] * Using the docker driver based on existing profile
	I0429 05:03:05.916957   14995 start.go:297] selected driver: docker
	I0429 05:03:05.916991   14995 start.go:901] validating driver "docker" against &{Name:multinode-888000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-888000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:03:05.917155   14995 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:03:05.917361   14995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:03:06.028190   14995 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-29 12:03:06.016655719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:03:06.031275   14995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:03:06.031345   14995 cni.go:84] Creating CNI manager for ""
	I0429 05:03:06.031355   14995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 05:03:06.031419   14995 start.go:340] cluster config:
	{Name:multinode-888000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-888000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:03:06.075042   14995 out.go:177] * Starting "multinode-888000" primary control-plane node in "multinode-888000" cluster
	I0429 05:03:06.095983   14995 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 05:03:06.117009   14995 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 05:03:06.158892   14995 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:03:06.158967   14995 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 05:03:06.158985   14995 cache.go:56] Caching tarball of preloaded images
	I0429 05:03:06.158974   14995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 05:03:06.159207   14995 preload.go:173] Found /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 05:03:06.159227   14995 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:03:06.159376   14995 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/multinode-888000/config.json ...
	I0429 05:03:06.211437   14995 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 05:03:06.211481   14995 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 05:03:06.211500   14995 cache.go:194] Successfully downloaded all kic artifacts
	I0429 05:03:06.211541   14995 start.go:360] acquireMachinesLock for multinode-888000: {Name:mk7ef4e0a331afdc76a7a1515dd33ef411b9e213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:03:06.211638   14995 start.go:364] duration metric: took 80.053µs to acquireMachinesLock for "multinode-888000"
	I0429 05:03:06.211664   14995 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:03:06.211675   14995 fix.go:54] fixHost starting: 
	I0429 05:03:06.211898   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:06.261567   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:06.261627   14995 fix.go:112] recreateIfNeeded on multinode-888000: state= err=unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:06.261649   14995 fix.go:117] machineExists: false. err=machine does not exist
	I0429 05:03:06.283375   14995 out.go:177] * docker "multinode-888000" container is missing, will recreate.
	I0429 05:03:06.325147   14995 delete.go:124] DEMOLISHING multinode-888000 ...
	I0429 05:03:06.325325   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:06.374916   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 05:03:06.374987   14995 stop.go:83] unable to get state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:06.375004   14995 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:06.375374   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:06.423143   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:06.423197   14995 delete.go:82] Unable to get host status for multinode-888000, assuming it has already been deleted: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:06.423269   14995 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 05:03:06.471886   14995 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 05:03:06.471919   14995 kic.go:371] could not find the container multinode-888000 to remove it. will try anyways
	I0429 05:03:06.471991   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:06.518992   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 05:03:06.519039   14995 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:06.519124   14995 cli_runner.go:164] Run: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0"
	W0429 05:03:06.567445   14995 cli_runner.go:211] docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 05:03:06.567476   14995 oci.go:650] error shutdown multinode-888000: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:07.568568   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:07.621414   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:07.621458   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:07.621472   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:07.621506   14995 retry.go:31] will retry after 450.304654ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:08.073962   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:08.126910   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:08.126953   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:08.126977   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:08.127001   14995 retry.go:31] will retry after 853.883053ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:08.981267   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:09.033218   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:09.033264   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:09.033291   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:09.033315   14995 retry.go:31] will retry after 964.479591ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:10.000202   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:10.054007   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:10.054049   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:10.054059   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:10.054084   14995 retry.go:31] will retry after 1.847742497s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:11.904258   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:11.955361   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:11.955405   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:11.955416   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:11.955453   14995 retry.go:31] will retry after 3.500456874s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:15.458507   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:15.508376   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:15.508414   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:15.508428   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:15.508451   14995 retry.go:31] will retry after 2.644879448s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:18.154735   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:18.209426   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:18.209474   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:18.209483   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:18.209506   14995 retry.go:31] will retry after 3.145197758s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:21.357105   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:03:21.407138   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:03:21.407183   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:03:21.407196   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:03:21.407227   14995 oci.go:88] couldn't shut down multinode-888000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	 
	I0429 05:03:21.407297   14995 cli_runner.go:164] Run: docker rm -f -v multinode-888000
	I0429 05:03:21.456912   14995 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 05:03:21.504973   14995 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 05:03:21.505080   14995 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:03:21.552504   14995 cli_runner.go:164] Run: docker network rm multinode-888000
	I0429 05:03:21.658290   14995 fix.go:124] Sleeping 1 second for extra luck!
	I0429 05:03:22.660031   14995 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:03:22.682071   14995 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 05:03:22.682250   14995 start.go:159] libmachine.API.Create for "multinode-888000" (driver="docker")
	I0429 05:03:22.682297   14995 client.go:168] LocalClient.Create starting
	I0429 05:03:22.682515   14995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:03:22.682611   14995 main.go:141] libmachine: Decoding PEM data...
	I0429 05:03:22.682648   14995 main.go:141] libmachine: Parsing certificate...
	I0429 05:03:22.682746   14995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:03:22.682826   14995 main.go:141] libmachine: Decoding PEM data...
	I0429 05:03:22.682842   14995 main.go:141] libmachine: Parsing certificate...
	I0429 05:03:22.683633   14995 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:03:22.734444   14995 cli_runner.go:211] docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:03:22.734533   14995 network_create.go:281] running [docker network inspect multinode-888000] to gather additional debugging logs...
	I0429 05:03:22.734553   14995 cli_runner.go:164] Run: docker network inspect multinode-888000
	W0429 05:03:22.785667   14995 cli_runner.go:211] docker network inspect multinode-888000 returned with exit code 1
	I0429 05:03:22.785699   14995 network_create.go:284] error running [docker network inspect multinode-888000]: docker network inspect multinode-888000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-888000 not found
	I0429 05:03:22.785709   14995 network_create.go:286] output of [docker network inspect multinode-888000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-888000 not found
	
	** /stderr **
	I0429 05:03:22.785843   14995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:03:22.835791   14995 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:03:22.837425   14995 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:03:22.837781   14995 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00222f9a0}
	I0429 05:03:22.837800   14995 network_create.go:124] attempt to create docker network multinode-888000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 05:03:22.837874   14995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	W0429 05:03:22.885781   14995 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000 returned with exit code 1
	W0429 05:03:22.885815   14995 network_create.go:149] failed to create docker network multinode-888000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 05:03:22.885831   14995 network_create.go:116] failed to create docker network multinode-888000 192.168.67.0/24, will retry: subnet is taken
	I0429 05:03:22.887429   14995 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:03:22.887798   14995 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023f8a60}
	I0429 05:03:22.887809   14995 network_create.go:124] attempt to create docker network multinode-888000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 05:03:22.887878   14995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	I0429 05:03:22.970899   14995 network_create.go:108] docker network multinode-888000 192.168.76.0/24 created
	I0429 05:03:22.970936   14995 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-888000" container
	I0429 05:03:22.971036   14995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:03:23.020139   14995 cli_runner.go:164] Run: docker volume create multinode-888000 --label name.minikube.sigs.k8s.io=multinode-888000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:03:23.069925   14995 oci.go:103] Successfully created a docker volume multinode-888000
	I0429 05:03:23.070028   14995 cli_runner.go:164] Run: docker run --rm --name multinode-888000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-888000 --entrypoint /usr/bin/test -v multinode-888000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:03:23.319607   14995 oci.go:107] Successfully prepared a docker volume multinode-888000
	I0429 05:03:23.319650   14995 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:03:23.319662   14995 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:03:23.319775   14995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-888000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:09:22.704588   14995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:09:22.704724   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:22.755320   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:22.755444   14995 retry.go:31] will retry after 353.657397ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:23.111422   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:23.162743   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:23.162852   14995 retry.go:31] will retry after 432.72233ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:23.596262   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:23.647013   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:23.647119   14995 retry.go:31] will retry after 285.705706ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:23.935240   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:23.986872   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:09:23.986991   14995 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:09:23.987009   14995 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:23.987083   14995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:09:23.987149   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:24.033970   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:24.034098   14995 retry.go:31] will retry after 334.287771ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:24.369001   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:24.420767   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:24.420860   14995 retry.go:31] will retry after 545.563841ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:24.968794   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:25.019005   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:25.019099   14995 retry.go:31] will retry after 843.385665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:25.862966   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:25.912485   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:09:25.912600   14995 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:09:25.912616   14995 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:25.912632   14995 start.go:128] duration metric: took 6m3.230612406s to createHost
	I0429 05:09:25.912706   14995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:09:25.912760   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:25.960321   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:25.960414   14995 retry.go:31] will retry after 350.458511ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:26.311871   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:26.363514   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:26.363606   14995 retry.go:31] will retry after 218.98425ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:26.582949   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:26.632737   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:26.632831   14995 retry.go:31] will retry after 363.797364ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:26.999083   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:27.052092   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:09:27.052188   14995 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:09:27.052209   14995 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:27.052273   14995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:09:27.052330   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:27.101596   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:27.101688   14995 retry.go:31] will retry after 218.158502ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:27.322195   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:27.374889   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:27.374976   14995 retry.go:31] will retry after 369.321322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:27.744972   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:27.797653   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:09:27.797749   14995 retry.go:31] will retry after 732.197352ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:28.530488   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:09:28.584207   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:09:28.584320   14995 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:09:28.584337   14995 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:28.584353   14995 fix.go:56] duration metric: took 6m22.350235785s for fixHost
	I0429 05:09:28.584361   14995 start.go:83] releasing machines lock for "multinode-888000", held for 6m22.350268927s
	W0429 05:09:28.584378   14995 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 05:09:28.584444   14995 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 05:09:28.584450   14995 start.go:728] Will try again in 5 seconds ...
	I0429 05:09:33.585378   14995 start.go:360] acquireMachinesLock for multinode-888000: {Name:mk7ef4e0a331afdc76a7a1515dd33ef411b9e213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:09:33.585576   14995 start.go:364] duration metric: took 151.091µs to acquireMachinesLock for "multinode-888000"
	I0429 05:09:33.585609   14995 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:09:33.585620   14995 fix.go:54] fixHost starting: 
	I0429 05:09:33.586082   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:33.636255   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:33.636295   14995 fix.go:112] recreateIfNeeded on multinode-888000: state= err=unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:33.636311   14995 fix.go:117] machineExists: false. err=machine does not exist
	I0429 05:09:33.658069   14995 out.go:177] * docker "multinode-888000" container is missing, will recreate.
	I0429 05:09:33.700766   14995 delete.go:124] DEMOLISHING multinode-888000 ...
	I0429 05:09:33.700994   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:33.750401   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 05:09:33.750445   14995 stop.go:83] unable to get state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:33.750468   14995 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:33.750838   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:33.798729   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:33.798787   14995 delete.go:82] Unable to get host status for multinode-888000, assuming it has already been deleted: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:33.798869   14995 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 05:09:33.846272   14995 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 05:09:33.846301   14995 kic.go:371] could not find the container multinode-888000 to remove it. will try anyways
	I0429 05:09:33.846373   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:33.894182   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 05:09:33.894226   14995 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:33.894310   14995 cli_runner.go:164] Run: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0"
	W0429 05:09:33.942708   14995 cli_runner.go:211] docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 05:09:33.942742   14995 oci.go:650] error shutdown multinode-888000: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:34.945171   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:34.996318   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:34.996363   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:34.996374   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:34.996395   14995 retry.go:31] will retry after 737.441647ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:35.735631   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:35.787753   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:35.787794   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:35.787804   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:35.787828   14995 retry.go:31] will retry after 440.731716ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:36.229082   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:36.282203   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:36.282248   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:36.282256   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:36.282278   14995 retry.go:31] will retry after 721.043696ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:37.005033   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:37.059193   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:37.059247   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:37.059257   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:37.059278   14995 retry.go:31] will retry after 953.541362ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:38.014063   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:38.063873   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:38.063918   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:38.063931   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:38.063955   14995 retry.go:31] will retry after 2.371236676s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:40.435699   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:40.488595   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:40.488637   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:40.488645   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:40.488668   14995 retry.go:31] will retry after 2.173600641s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:42.663347   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:42.714491   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:42.714540   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:42.714550   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:42.714573   14995 retry.go:31] will retry after 5.007818456s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:47.724978   14995 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:09:47.778989   14995 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:09:47.779034   14995 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:09:47.779042   14995 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:09:47.779071   14995 oci.go:88] couldn't shut down multinode-888000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	 
	I0429 05:09:47.779142   14995 cli_runner.go:164] Run: docker rm -f -v multinode-888000
	I0429 05:09:47.828070   14995 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 05:09:47.875556   14995 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 05:09:47.875663   14995 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:09:47.923508   14995 cli_runner.go:164] Run: docker network rm multinode-888000
	I0429 05:09:48.029545   14995 fix.go:124] Sleeping 1 second for extra luck!
	I0429 05:09:49.030011   14995 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:09:49.052072   14995 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 05:09:49.052249   14995 start.go:159] libmachine.API.Create for "multinode-888000" (driver="docker")
	I0429 05:09:49.052279   14995 client.go:168] LocalClient.Create starting
	I0429 05:09:49.052503   14995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:09:49.052598   14995 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:49.052625   14995 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:49.052704   14995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:09:49.052784   14995 main.go:141] libmachine: Decoding PEM data...
	I0429 05:09:49.052799   14995 main.go:141] libmachine: Parsing certificate...
	I0429 05:09:49.074040   14995 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:09:49.124224   14995 cli_runner.go:211] docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:09:49.124311   14995 network_create.go:281] running [docker network inspect multinode-888000] to gather additional debugging logs...
	I0429 05:09:49.124327   14995 cli_runner.go:164] Run: docker network inspect multinode-888000
	W0429 05:09:49.172703   14995 cli_runner.go:211] docker network inspect multinode-888000 returned with exit code 1
	I0429 05:09:49.172737   14995 network_create.go:284] error running [docker network inspect multinode-888000]: docker network inspect multinode-888000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-888000 not found
	I0429 05:09:49.172750   14995 network_create.go:286] output of [docker network inspect multinode-888000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-888000 not found
	
	** /stderr **
	I0429 05:09:49.172896   14995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:09:49.222931   14995 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:09:49.224494   14995 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:09:49.225948   14995 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:09:49.227502   14995 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:09:49.227831   14995 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002463920}
	I0429 05:09:49.227844   14995 network_create.go:124] attempt to create docker network multinode-888000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 05:09:49.227908   14995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	I0429 05:09:49.311902   14995 network_create.go:108] docker network multinode-888000 192.168.85.0/24 created
	I0429 05:09:49.311932   14995 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-888000" container
	I0429 05:09:49.312035   14995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:09:49.361338   14995 cli_runner.go:164] Run: docker volume create multinode-888000 --label name.minikube.sigs.k8s.io=multinode-888000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:09:49.409184   14995 oci.go:103] Successfully created a docker volume multinode-888000
	I0429 05:09:49.409308   14995 cli_runner.go:164] Run: docker run --rm --name multinode-888000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-888000 --entrypoint /usr/bin/test -v multinode-888000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:09:49.656812   14995 oci.go:107] Successfully prepared a docker volume multinode-888000
	I0429 05:09:49.656852   14995 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:09:49.656866   14995 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:09:49.656961   14995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-888000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 05:15:49.063393   14995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:15:49.063519   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:49.113185   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:49.113306   14995 retry.go:31] will retry after 235.147499ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:49.349266   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:49.400733   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:49.400850   14995 retry.go:31] will retry after 285.696596ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:49.687850   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:49.738037   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:49.738148   14995 retry.go:31] will retry after 339.933691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:50.080523   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:50.130538   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:15:50.130643   14995 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:15:50.130671   14995 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:50.130728   14995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:15:50.130780   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:50.178121   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:50.178217   14995 retry.go:31] will retry after 152.582528ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:50.331467   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:50.384277   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:50.384388   14995 retry.go:31] will retry after 536.226499ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:50.921653   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:50.972412   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:50.972508   14995 retry.go:31] will retry after 674.072088ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:51.648985   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:51.700271   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:15:51.700378   14995 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:15:51.700395   14995 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:51.700407   14995 start.go:128] duration metric: took 6m2.659497391s to createHost
	I0429 05:15:51.700480   14995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 05:15:51.700535   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:51.748808   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:51.748900   14995 retry.go:31] will retry after 365.471727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:52.115113   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:52.167724   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:52.167824   14995 retry.go:31] will retry after 417.218359ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:52.585310   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:52.635404   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:52.635503   14995 retry.go:31] will retry after 748.027106ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:53.383864   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:53.434545   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:15:53.434657   14995 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:15:53.434684   14995 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:53.434742   14995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 05:15:53.434802   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:53.482709   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:53.482800   14995 retry.go:31] will retry after 174.310316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:53.659519   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:53.710775   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:53.710879   14995 retry.go:31] will retry after 401.747898ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:54.113216   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:54.164783   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:54.164877   14995 retry.go:31] will retry after 721.943648ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:54.888623   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:54.937866   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	I0429 05:15:54.937961   14995 retry.go:31] will retry after 474.612269ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:55.414966   14995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000
	W0429 05:15:55.467928   14995 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000 returned with exit code 1
	W0429 05:15:55.468032   14995 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	W0429 05:15:55.468055   14995 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-888000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-888000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:15:55.468065   14995 fix.go:56] duration metric: took 6m21.870990907s for fixHost
	I0429 05:15:55.468071   14995 start.go:83] releasing machines lock for "multinode-888000", held for 6m21.871028725s
	W0429 05:15:55.468148   14995 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-888000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-888000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 05:15:55.511479   14995 out.go:177] 
	W0429 05:15:55.532642   14995 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 05:15:55.532693   14995 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 05:15:55.532722   14995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 05:15:55.553351   14995 out.go:177] 

                                                
                                                
** /stderr **
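(Editor's note, not part of the test transcript.) Both createHost attempts above show the same six-minute silence: the preload extraction starts (kic.go:194, the `docker run ... tar` Run: line at 05:03:23 and again at 05:09:49), and the next log line is a df probe almost exactly 360 seconds later, which matches the "create host timed out in 360.000000 seconds" the run finally reports. That is, minikube appears to have given up while the preload tarball was still being unpacked into the volume. A hedged way to time that one step in isolation, copied from the Run: line above (the tarball path is this CI host's; substitute your own .minikube cache path):

	# sketch: time the preload extraction that the 360s create-host budget must cover
	time docker run --rm --entrypoint /usr/bin/tar \
	  -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
	  -v multinode-888000:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e \
	  -I lz4 -xf /preloaded.tar -C /extractDir
	# anything approaching 360s here is consistent with the DRV_CREATE_TIMEOUT above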
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-888000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-888000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "d9bf75eeb510366a9314ca6f7bd3105113c31aae8737e0780ed58119dffa3625",
	        "Created": "2024-04-29T12:09:49.272132103Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
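(Editor's note.) The `docker inspect multinode-888000` above succeeds but returns a network object (Driver: bridge, IPAM, Subnet), not a container: bare `docker inspect` matches any Docker object type, and the leftover multinode-888000 bridge network created at 05:09:49 is the only object still holding that name after the container was never created. A hedged sketch of the disambiguated form:

	docker inspect --type container multinode-888000   # Error: No such container, as in the status probes below
	docker inspect --type network multinode-888000     # returns the network JSON shown above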
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (112.942468ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:15:55.859801   15400 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (785.99s)
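(Editor's note.) Every retry block in the transcript above wraps one probe: minikube asks Docker which host port is published against 22/tcp so its ssh_runner can reach the guest, and each attempt exits 1 because the container does not exist. The probe can be replayed by hand with the exact Go template from the Run: lines (a sketch, assuming the Docker CLI and this profile name):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  multinode-888000
	# -> Error response from daemon: No such container: multinode-888000 (exit status 1)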

                                                
                                    
TestMultiNode/serial/DeleteNode (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 node delete m03: exit status 80 (199.790257ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-888000 host status: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-888000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr: exit status 7 (113.405032ms)

                                                
                                                
-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:15:56.122681   15408 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:15:56.123420   15408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:15:56.123432   15408 out.go:304] Setting ErrFile to fd 2...
	I0429 05:15:56.123446   15408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:15:56.123856   15408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:15:56.124152   15408 out.go:298] Setting JSON to false
	I0429 05:15:56.124177   15408 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:15:56.124224   15408 notify.go:220] Checking for updates...
	I0429 05:15:56.124438   15408 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:15:56.124453   15408 status.go:255] checking status of multinode-888000 ...
	I0429 05:15:56.124830   15408 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:15:56.173231   15408 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:15:56.173290   15408 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:15:56.173312   15408 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:15:56.173334   15408 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:15:56.173341   15408 status.go:263] The "multinode-888000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "d9bf75eeb510366a9314ca6f7bd3105113c31aae8737e0780ed58119dffa3625",
	        "Created": "2024-04-29T12:09:49.272132103Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
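Note that the post-mortem "docker inspect multinode-888000" succeeds even though the container is gone: plain "docker inspect" resolves a name across object types, and the JSON above describes the leftover bridge network, not a container ("Scope", "Driver": "bridge", "IPAM", and an empty "Containers" map are network attributes). The type-scoped commands make the distinction explicit:

	docker network inspect multinode-888000     # succeeds: the minikube-created network still exists
	docker container inspect multinode-888000   # fails: No such container: multinode-888000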
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (113.13331ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:15:56.338577   15414 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.48s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 stop: exit status 82 (16.210206331s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	* Stopping node "multinode-888000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-888000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-888000 stop": exit status 82
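Exit status 82 corresponds to the GUEST_STOP_TIMEOUT error shown in the stderr block above. Each "* Stopping node" line in the stdout is one attempt of the stop loop; because the container no longer exists, "docker container inspect" can never observe a stopped state, so after six attempts (about 16 seconds here) the stop gives up. The literal "--format=<no value>" in the error message appears to be a reporting quirk ("<no value>" is what Go's text/template renders for a missing field) rather than the cause of the failure.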
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status: exit status 7 (112.796156ms)

                                                
                                                
-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:16:12.662402   15433 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:16:12.662421   15433 status.go:263] The "multinode-888000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr: exit status 7 (112.268783ms)

                                                
                                                
-- stdout --
	multinode-888000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:16:12.725514   15437 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:16:12.725718   15437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:16:12.725723   15437 out.go:304] Setting ErrFile to fd 2...
	I0429 05:16:12.725726   15437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:16:12.725901   15437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:16:12.726087   15437 out.go:298] Setting JSON to false
	I0429 05:16:12.726107   15437 mustload.go:65] Loading cluster: multinode-888000
	I0429 05:16:12.726152   15437 notify.go:220] Checking for updates...
	I0429 05:16:12.726380   15437 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:16:12.726394   15437 status.go:255] checking status of multinode-888000 ...
	I0429 05:16:12.726785   15437 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:12.774761   15437 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:12.774813   15437 status.go:330] multinode-888000 host status = "" (err=state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	)
	I0429 05:16:12.774829   15437 status.go:257] multinode-888000 status: &{Name:multinode-888000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 05:16:12.774844   15437 status.go:260] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	E0429 05:16:12.774854   15437 status.go:263] The "multinode-888000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr": multinode-888000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-888000 status --alsologtostderr": multinode-888000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "d9bf75eeb510366a9314ca6f7bd3105113c31aae8737e0780ed58119dffa3625",
	        "Created": "2024-04-29T12:09:49.272132103Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (112.933798ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 05:16:12.939612   15443 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (16.60s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (74.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-888000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-888000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m14.061546285s)

                                                
                                                
-- stdout --
	* [multinode-888000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-888000" primary control-plane node in "multinode-888000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* docker "multinode-888000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 05:16:13.002444   15447 out.go:291] Setting OutFile to fd 1 ...
	I0429 05:16:13.002610   15447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:16:13.002616   15447 out.go:304] Setting ErrFile to fd 2...
	I0429 05:16:13.002619   15447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 05:16:13.002793   15447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 05:16:13.004137   15447 out.go:298] Setting JSON to false
	I0429 05:16:13.026358   15447 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6343,"bootTime":1714386630,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 05:16:13.026458   15447 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 05:16:13.048701   15447 out.go:177] * [multinode-888000] minikube v1.33.0 on Darwin 14.4.1
	I0429 05:16:13.090206   15447 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 05:16:13.090245   15447 notify.go:220] Checking for updates...
	I0429 05:16:13.112369   15447 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 05:16:13.133244   15447 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 05:16:13.154036   15447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 05:16:13.175247   15447 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 05:16:13.197327   15447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 05:16:13.219007   15447 config.go:182] Loaded profile config "multinode-888000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 05:16:13.219791   15447 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 05:16:13.275095   15447 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 05:16:13.275307   15447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:16:13.383801   15447 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-29 12:16:13.372148234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:16:13.426358   15447 out.go:177] * Using the docker driver based on existing profile
	I0429 05:16:13.447591   15447 start.go:297] selected driver: docker
	I0429 05:16:13.447630   15447 start.go:901] validating driver "docker" against &{Name:multinode-888000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-888000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:16:13.447734   15447 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 05:16:13.447935   15447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 05:16:13.557040   15447 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-29 12:16:13.545155399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 05:16:13.560060   15447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 05:16:13.560130   15447 cni.go:84] Creating CNI manager for ""
	I0429 05:16:13.560140   15447 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 05:16:13.560214   15447 start.go:340] cluster config:
	{Name:multinode-888000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-888000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 05:16:13.603495   15447 out.go:177] * Starting "multinode-888000" primary control-plane node in "multinode-888000" cluster
	I0429 05:16:13.624702   15447 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 05:16:13.645650   15447 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 05:16:13.687687   15447 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:16:13.687746   15447 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 05:16:13.687766   15447 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 05:16:13.687784   15447 cache.go:56] Caching tarball of preloaded images
	I0429 05:16:13.688032   15447 preload.go:173] Found /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 05:16:13.688057   15447 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 05:16:13.689018   15447 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/multinode-888000/config.json ...
	I0429 05:16:13.738341   15447 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 05:16:13.738373   15447 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 05:16:13.738392   15447 cache.go:194] Successfully downloaded all kic artifacts
	I0429 05:16:13.738442   15447 start.go:360] acquireMachinesLock for multinode-888000: {Name:mk7ef4e0a331afdc76a7a1515dd33ef411b9e213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 05:16:13.738533   15447 start.go:364] duration metric: took 72.166µs to acquireMachinesLock for "multinode-888000"
	I0429 05:16:13.738559   15447 start.go:96] Skipping create...Using existing machine configuration
	I0429 05:16:13.738571   15447 fix.go:54] fixHost starting: 
	I0429 05:16:13.738826   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:13.787738   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:13.787798   15447 fix.go:112] recreateIfNeeded on multinode-888000: state= err=unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:13.787822   15447 fix.go:117] machineExists: false. err=machine does not exist
	I0429 05:16:13.809589   15447 out.go:177] * docker "multinode-888000" container is missing, will recreate.
	I0429 05:16:13.851421   15447 delete.go:124] DEMOLISHING multinode-888000 ...
	I0429 05:16:13.851592   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:13.901817   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 05:16:13.901874   15447 stop.go:83] unable to get state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:13.901897   15447 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:13.902252   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:13.950219   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:13.950275   15447 delete.go:82] Unable to get host status for multinode-888000, assuming it has already been deleted: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:13.950348   15447 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 05:16:14.000104   15447 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 05:16:14.000136   15447 kic.go:371] could not find the container multinode-888000 to remove it. will try anyways
	I0429 05:16:14.000225   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:14.048269   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	W0429 05:16:14.048314   15447 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:14.048392   15447 cli_runner.go:164] Run: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0"
	W0429 05:16:14.096037   15447 cli_runner.go:211] docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 05:16:14.096070   15447 oci.go:650] error shutdown multinode-888000: docker exec --privileged -t multinode-888000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:15.097046   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:15.147561   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:15.147603   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:15.147611   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:15.147647   15447 retry.go:31] will retry after 355.347219ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:15.503928   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:15.554748   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:15.554799   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:15.554810   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:15.554831   15447 retry.go:31] will retry after 941.509827ms: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:16.496692   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:16.546092   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:16.546135   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:16.546148   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:16.546175   15447 retry.go:31] will retry after 1.044493152s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:17.591560   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:17.642259   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:17.642304   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:17.642311   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:17.642333   15447 retry.go:31] will retry after 1.063545765s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:18.708313   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:18.761513   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:18.761556   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:18.761564   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:18.761587   15447 retry.go:31] will retry after 1.832395383s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:20.594267   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:20.644769   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:20.644809   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:20.644817   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:20.644849   15447 retry.go:31] will retry after 3.171148567s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:23.818494   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:23.870491   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:23.870536   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:23.870543   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:23.870564   15447 retry.go:31] will retry after 4.047630777s: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:27.920170   15447 cli_runner.go:164] Run: docker container inspect multinode-888000 --format={{.State.Status}}
	W0429 05:16:27.971908   15447 cli_runner.go:211] docker container inspect multinode-888000 --format={{.State.Status}} returned with exit code 1
	I0429 05:16:27.971951   15447 oci.go:662] temporary error verifying shutdown: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	I0429 05:16:27.971962   15447 oci.go:664] temporary error: container multinode-888000 status is  but expect it to be exited
	I0429 05:16:27.971992   15447 oci.go:88] couldn't shut down multinode-888000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000
	 
	I0429 05:16:27.972078   15447 cli_runner.go:164] Run: docker rm -f -v multinode-888000
	I0429 05:16:28.035239   15447 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-888000
	W0429 05:16:28.083554   15447 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-888000 returned with exit code 1
	I0429 05:16:28.083677   15447 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:16:28.130935   15447 cli_runner.go:164] Run: docker network rm multinode-888000
	I0429 05:16:28.237255   15447 fix.go:124] Sleeping 1 second for extra luck!
	I0429 05:16:29.238981   15447 start.go:125] createHost starting for "" (driver="docker")
	I0429 05:16:29.262049   15447 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 05:16:29.262165   15447 start.go:159] libmachine.API.Create for "multinode-888000" (driver="docker")
	I0429 05:16:29.262198   15447 client.go:168] LocalClient.Create starting
	I0429 05:16:29.262345   15447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/ca.pem
	I0429 05:16:29.262424   15447 main.go:141] libmachine: Decoding PEM data...
	I0429 05:16:29.262441   15447 main.go:141] libmachine: Parsing certificate...
	I0429 05:16:29.262494   15447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18756-6674/.minikube/certs/cert.pem
	I0429 05:16:29.262570   15447 main.go:141] libmachine: Decoding PEM data...
	I0429 05:16:29.262591   15447 main.go:141] libmachine: Parsing certificate...
	I0429 05:16:29.263000   15447 cli_runner.go:164] Run: docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 05:16:29.315066   15447 cli_runner.go:211] docker network inspect multinode-888000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 05:16:29.315157   15447 network_create.go:281] running [docker network inspect multinode-888000] to gather additional debugging logs...
	I0429 05:16:29.315173   15447 cli_runner.go:164] Run: docker network inspect multinode-888000
	W0429 05:16:29.363416   15447 cli_runner.go:211] docker network inspect multinode-888000 returned with exit code 1
	I0429 05:16:29.363445   15447 network_create.go:284] error running [docker network inspect multinode-888000]: docker network inspect multinode-888000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-888000 not found
	I0429 05:16:29.363455   15447 network_create.go:286] output of [docker network inspect multinode-888000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-888000 not found
	
	** /stderr **
	I0429 05:16:29.363578   15447 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 05:16:29.413669   15447 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:16:29.415299   15447 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:16:29.415719   15447 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024e04c0}
	I0429 05:16:29.415736   15447 network_create.go:124] attempt to create docker network multinode-888000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 05:16:29.415802   15447 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	W0429 05:16:29.463787   15447 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000 returned with exit code 1
	W0429 05:16:29.463824   15447 network_create.go:149] failed to create docker network multinode-888000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 05:16:29.463846   15447 network_create.go:116] failed to create docker network multinode-888000 192.168.67.0/24, will retry: subnet is taken
	I0429 05:16:29.465474   15447 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 05:16:29.465855   15447 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025151c0}
	I0429 05:16:29.465867   15447 network_create.go:124] attempt to create docker network multinode-888000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 05:16:29.465942   15447 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-888000 multinode-888000
	I0429 05:16:29.550247   15447 network_create.go:108] docker network multinode-888000 192.168.76.0/24 created
	I0429 05:16:29.550301   15447 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-888000" container
	I0429 05:16:29.550414   15447 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 05:16:29.599640   15447 cli_runner.go:164] Run: docker volume create multinode-888000 --label name.minikube.sigs.k8s.io=multinode-888000 --label created_by.minikube.sigs.k8s.io=true
	I0429 05:16:29.648187   15447 oci.go:103] Successfully created a docker volume multinode-888000
	I0429 05:16:29.648298   15447 cli_runner.go:164] Run: docker run --rm --name multinode-888000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-888000 --entrypoint /usr/bin/test -v multinode-888000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 05:16:29.892209   15447 oci.go:107] Successfully prepared a docker volume multinode-888000
	I0429 05:16:29.892253   15447 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 05:16:29.892266   15447 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 05:16:29.892353   15447 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-888000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-888000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
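The oci.go lines in the stderr block above show minikube's shutdown-verification loop in action: after "sudo init 0" fails, it polls "docker container inspect" with growing delays (355ms, 941ms, 1.04s, 1.06s, 1.83s, 3.17s, 4.05s) before concluding "couldn't shut down ... (might be okay)" and falling through to "docker rm -f -v". A minimal Go sketch of that polling pattern follows; this is not minikube's actual source, and the names and the doubling-with-cap backoff are illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// verifyContainerExited polls the container state until it reads "exited"
	// or the deadline passes. For a container that was already deleted, the
	// inspect command fails forever, so the loop can only time out.
	func verifyContainerExited(name string, deadline time.Duration) error {
		start := time.Now()
		wait := 350 * time.Millisecond
		for time.Since(start) < deadline {
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "exited" {
				return nil
			}
			time.Sleep(wait)
			wait *= 2 // roughly the growth seen in the retry.go lines above
			if wait > 4*time.Second {
				wait = 4 * time.Second
			}
		}
		return fmt.Errorf("couldn't verify container %q is exited", name)
	}

	func main() {
		// With the container deleted, this prints the error after ~14s of
		// polling, mirroring the "might be okay" outcome in the log.
		if err := verifyContainerExited("multinode-888000", 14*time.Second); err != nil {
			fmt.Println(err)
		}
	}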
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-888000
helpers_test.go:235: (dbg) docker inspect multinode-888000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-888000",
	        "Id": "51d73cdd1fa2991c8a80c5cbe581f086652b492eed8914b57e5fe56a8ccb54ba",
	        "Created": "2024-04-29T12:16:29.510508689Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-888000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-888000 -n multinode-888000: exit status 7 (114.476992ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:17:27.177338   15563 status.go:249] status error: host: state: unknown state "multinode-888000": docker container inspect multinode-888000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-888000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-888000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (74.24s)
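
The network_create lines at the top of this failure's stderr show the two steps minikube takes before creating a node container: it probes 192.168.x.0/24 candidates for a free private subnet (skipping the reserved 192.168.67.0/24 and settling on 192.168.76.0/24 here), then shells out to docker network create. A minimal Go sketch of that second step, assuming only that the Docker CLI is on PATH; the flag values are copied from the cli_runner line above, so this is an illustration of the logged invocation, not minikube's own code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createMinikubeNetwork mirrors the `docker network create` invocation
    // logged by cli_runner above: bridge driver, fixed subnet and gateway,
    // ip-masq/icc bridge options, MTU 65535, and minikube's ownership labels.
    func createMinikubeNetwork(name, subnet, gateway string) error {
        args := []string{
            "network", "create",
            "--driver=bridge",
            "--subnet=" + subnet,
            "--gateway=" + gateway,
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=65535",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=" + name,
            name,
        }
        if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
            return fmt.Errorf("docker network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Values taken from the network_create.go log lines above.
        if err := createMinikubeNetwork("multinode-888000", "192.168.76.0/24", "192.168.76.1"); err != nil {
            fmt.Println(err)
        }
    }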

TestScheduledStopUnix (300.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-983000 --memory=2048 --driver=docker 
E0429 05:22:35.561037    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:23:20.463007    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-983000 --memory=2048 --driver=docker : signal: killed (5m0.0051712s)

-- stdout --
	* [scheduled-stop-983000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-983000" primary control-plane node in "scheduled-stop-983000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

-- stdout --
	* [scheduled-stop-983000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-983000" primary control-plane node in "scheduled-stop-983000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-29 05:24:40.950416 -0700 PDT m=+4989.682838032
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-983000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-983000:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-983000",
	        "Id": "70d3b83e79f7ac5fbcf54c818681d327144d0546b2f9cdcf979ef08b81ecb885",
	        "Created": "2024-04-29T12:19:42.084200361Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-983000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-983000 -n scheduled-stop-983000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-983000 -n scheduled-stop-983000: exit status 7 (113.541341ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:24:41.116478   16092 status.go:249] status error: host: state: unknown state "scheduled-stop-983000": docker container inspect scheduled-stop-983000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-983000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-983000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-983000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-983000
--- FAIL: TestScheduledStopUnix (300.89s)
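
This post-mortem ends the same way as TestMultiNode's: the status probe (status.go:249 in the stderr above) shells out to docker container inspect with a Go template and reports "Nonexistent" when the daemon answers "No such container". A rough, self-contained sketch of that probe; the exact state mapping below is an assumption for illustration, not minikube's status.go logic:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState runs the same probe as the stderr above:
    //   docker container inspect <name> --format={{.State.Status}}
    // and maps a missing container to the "Nonexistent" state.
    func containerState(name string) string {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "No such container") {
                return "Nonexistent"
            }
            return "Error"
        }
        return strings.TrimSpace(string(out)) // e.g. "running", "exited"
    }

    func main() {
        fmt.Println(containerState("scheduled-stop-983000"))
    }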

TestSkaffold (300.95s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2031066564 version
E0429 05:24:43.512765    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
skaffold_test.go:59: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2031066564 version: (1.457394916s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-236000 --memory=2600 --driver=docker 
E0429 05:27:35.570649    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:28:20.473171    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-236000 --memory=2600 --driver=docker : signal: killed (4m57.376860953s)

-- stdout --
	* [skaffold-236000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-236000" primary control-plane node in "skaffold-236000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

-- stdout --
	* [skaffold-236000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-236000" primary control-plane node in "skaffold-236000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-04-29 05:29:41.855184 -0700 PDT m=+5290.578579560
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-236000
helpers_test.go:235: (dbg) docker inspect skaffold-236000:

-- stdout --
	[
	    {
	        "Name": "skaffold-236000",
	        "Id": "0c618da67165202833c8e7aa3437506484c666414d9b093bf7db9932fd257d73",
	        "Created": "2024-04-29T12:24:45.621116821Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-236000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-236000 -n skaffold-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-236000 -n skaffold-236000: exit status 7 (113.760907ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 05:29:42.021569   16291 status.go:249] status error: host: state: unknown state "skaffold-236000": docker container inspect skaffold-236000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-236000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-236000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-236000
--- FAIL: TestSkaffold (300.95s)
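
Each of these post-mortems runs docker inspect against the profile name; since the node container was never created, only the bridge network answers, which is why the dumps show a network object rather than a container. Decoding the fields visible in those dumps takes only a couple of nested structs; the sketch below hardcodes a trimmed copy of the skaffold-236000 dump purely for illustration:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // network models only the fields visible in the docker inspect dumps above.
    type network struct {
        Name   string `json:"Name"`
        Driver string `json:"Driver"`
        IPAM   struct {
            Config []struct {
                Subnet  string `json:"Subnet"`
                Gateway string `json:"Gateway"`
            } `json:"Config"`
        } `json:"IPAM"`
        Labels map[string]string `json:"Labels"`
    }

    func main() {
        // Trimmed copy of the skaffold-236000 dump shown above.
        raw := `[{"Name":"skaffold-236000","Driver":"bridge",
          "IPAM":{"Config":[{"Subnet":"192.168.76.0/24","Gateway":"192.168.76.1"}]},
          "Labels":{"name.minikube.sigs.k8s.io":"skaffold-236000"}}]`

        var nets []network
        if err := json.Unmarshal([]byte(raw), &nets); err != nil {
            panic(err)
        }
        for _, n := range nets {
            fmt.Println(n.Name, n.Driver, n.IPAM.Config[0].Subnet, n.IPAM.Config[0].Gateway)
        }
    }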

TestInsufficientStorage (300.73s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-572000 --memory=2048 --output=json --wait=true --driver=docker 
E0429 05:32:35.579105    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:33:20.482246    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-572000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.00544574s)

-- stdout --
	{"specversion":"1.0","id":"5fce29ba-cafe-4809-b3e6-93e193fe313e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-572000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26110e49-582c-4813-ae41-55782a0b9987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18756"}}
	{"specversion":"1.0","id":"2cbce3a7-1b36-438b-8685-078311ce6fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig"}}
	{"specversion":"1.0","id":"94d5bbf2-0a02-4393-86fe-069c87aa1312","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"ae3b02bb-49da-4d8c-8c02-b32b548b73b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8a639ca5-467d-4eb3-825c-fb4ef10aa935","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube"}}
	{"specversion":"1.0","id":"fa356071-2425-448c-9047-b339a08a0c90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"391e68ee-f3e8-4630-ab72-755c4eb34d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"386214cf-a668-46ae-b84b-3c8a40b56be0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"10771942-c8df-400b-853d-9b1e0c2baa2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e74c168-58d3-4f6c-8aaf-fdd9c80434fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"58f63c1c-31fc-42dd-b76a-585fa962eee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-572000\" primary control-plane node in \"insufficient-storage-572000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ec26435-3a9e-4e79-98df-4e3db0d84229","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713736339-18706 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a825149a-fef5-4d0d-bc43-f1a4868f8468","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-572000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-572000 --output=json --layout=cluster: context deadline exceeded (809ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-572000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-572000
--- FAIL: TestInsufficientStorage (300.73s)
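
With --output=json, minikube start emits one CloudEvents-style JSON object per line, as in the stdout above; the follow-up status call hit its context deadline before printing anything, which is why status_test.go:87 reports "unexpected end of JSON input". A minimal line-by-line decoder for that event stream, assuming only the fields visible in the output above:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event models the CloudEvents-style lines above; data carries string
    // fields such as message, name, currentstep and totalsteps.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Pipe the JSON output in, e.g.:
        //   minikube start -p insufficient-storage-572000 --output=json | go run .
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON noise in the stream
            }
            fmt.Printf("%-40s step=%-2s %s\n", ev.Type, ev.Data["currentstep"], ev.Data["message"])
        }
    }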


Test pass (162/201)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.05
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.63
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 7.06
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.3
18 TestDownloadOnly/v1.30.0/DeleteAll 0.63
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.37
20 TestDownloadOnlyKic 1.9
21 TestBinaryMirror 1.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.18
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 339.08
31 TestAddons/parallel/InspektorGadget 11.79
32 TestAddons/parallel/MetricsServer 5.74
33 TestAddons/parallel/HelmTiller 10.39
35 TestAddons/parallel/CSI 48.05
36 TestAddons/parallel/Headlamp 12.1
37 TestAddons/parallel/CloudSpanner 5.64
38 TestAddons/parallel/LocalPath 55.03
39 TestAddons/parallel/NvidiaDevicePlugin 5.65
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/StoppedEnableDisable 11.86
52 TestHyperKitDriverInstallOrUpdate 7.74
55 TestErrorSpam/setup 20.18
56 TestErrorSpam/start 2.53
57 TestErrorSpam/status 1.18
58 TestErrorSpam/pause 1.65
59 TestErrorSpam/unpause 1.65
60 TestErrorSpam/stop 2.81
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 75.06
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 29.24
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 11.44
72 TestFunctional/serial/CacheCmd/cache/add_local 1.6
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.09
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
76 TestFunctional/serial/CacheCmd/cache/cache_reload 3.57
77 TestFunctional/serial/CacheCmd/cache/delete 0.18
78 TestFunctional/serial/MinikubeKubectlCmd 1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.44
80 TestFunctional/serial/ExtraConfig 41.03
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 3.03
83 TestFunctional/serial/LogsFileCmd 3.1
84 TestFunctional/serial/InvalidService 4.84
86 TestFunctional/parallel/ConfigCmd 0.55
87 TestFunctional/parallel/DashboardCmd 23.06
88 TestFunctional/parallel/DryRun 1.37
89 TestFunctional/parallel/InternationalLanguage 0.67
90 TestFunctional/parallel/StatusCmd 1.2
95 TestFunctional/parallel/AddonsCmd 0.28
96 TestFunctional/parallel/PersistentVolumeClaim 30.76
98 TestFunctional/parallel/SSHCmd 0.8
99 TestFunctional/parallel/CpCmd 2.69
100 TestFunctional/parallel/MySQL 30.28
101 TestFunctional/parallel/FileSync 0.48
102 TestFunctional/parallel/CertSync 2.56
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
110 TestFunctional/parallel/License 0.55
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.19
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
122 TestFunctional/parallel/ServiceCmd/DeployApp 8.12
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
124 TestFunctional/parallel/ProfileCmd/profile_list 0.53
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
126 TestFunctional/parallel/MountCmd/any-port 7.56
127 TestFunctional/parallel/ServiceCmd/List 0.6
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
129 TestFunctional/parallel/ServiceCmd/HTTPS 15
130 TestFunctional/parallel/MountCmd/specific-port 2.26
131 TestFunctional/parallel/MountCmd/VerifyCleanup 2.7
132 TestFunctional/parallel/ServiceCmd/Format 15
133 TestFunctional/parallel/ServiceCmd/URL 15
134 TestFunctional/parallel/Version/short 0.12
135 TestFunctional/parallel/Version/components 0.96
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.41
141 TestFunctional/parallel/ImageCommands/Setup 2.19
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.92
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.37
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.73
145 TestFunctional/parallel/DockerEnv/bash 1.74
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.2
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.32
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.71
153 TestFunctional/delete_addon-resizer_images 0.13
154 TestFunctional/delete_my-image_image 0.05
155 TestFunctional/delete_minikube_cached_images 0.05
159 TestMultiControlPlane/serial/StartCluster 104.33
160 TestMultiControlPlane/serial/DeployApp 105.35
161 TestMultiControlPlane/serial/PingHostFromPods 1.45
162 TestMultiControlPlane/serial/AddWorkerNode 19.27
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
165 TestMultiControlPlane/serial/CopyFile 25.04
166 TestMultiControlPlane/serial/StopSecondaryNode 11.89
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
168 TestMultiControlPlane/serial/RestartSecondaryNode 25.58
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.48
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 235.48
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.8
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
173 TestMultiControlPlane/serial/StopCluster 32.75
174 TestMultiControlPlane/serial/RestartCluster 91.74
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
176 TestMultiControlPlane/serial/AddSecondaryNode 39.45
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
180 TestImageBuild/serial/Setup 21.34
181 TestImageBuild/serial/NormalBuild 4.19
182 TestImageBuild/serial/BuildWithBuildArg 1.55
183 TestImageBuild/serial/BuildWithDockerIgnore 1.34
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.28
188 TestJSONOutput/start/Command 36.22
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.57
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.59
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.75
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.76
213 TestKicCustomNetwork/create_custom_network 22.17
214 TestKicCustomNetwork/use_default_bridge_network 21.64
215 TestKicExistingNetwork 22.24
216 TestKicCustomSubnet 23.61
217 TestKicStaticIP 22.31
218 TestMainNoArgs 0.09
219 TestMinikubeProfile 47.15
222 TestMountStart/serial/StartWithMountFirst 7.18
242 TestPreload 132.79
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.69
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.82
TestDownloadOnly/v1.20.0/json-events (11.05s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-862000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-862000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (11.045979674s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.05s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-862000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-862000: exit status 85 (304.043078ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-862000 | jenkins | v1.33.0 | 29 Apr 24 04:01 PDT |          |
	|         | -p download-only-862000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 04:01:31
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 04:01:31.156373    7117 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:01:31.157173    7117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:01:31.157182    7117 out.go:304] Setting ErrFile to fd 2...
	I0429 04:01:31.157187    7117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:01:31.157733    7117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	W0429 04:01:31.157861    7117 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18756-6674/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18756-6674/.minikube/config/config.json: no such file or directory
	I0429 04:01:31.159690    7117 out.go:298] Setting JSON to true
	I0429 04:01:31.182371    7117 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1861,"bootTime":1714386630,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 04:01:31.182476    7117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:01:31.203899    7117 out.go:97] [download-only-862000] minikube v1.33.0 on Darwin 14.4.1
	I0429 04:01:31.225755    7117 out.go:169] MINIKUBE_LOCATION=18756
	I0429 04:01:31.204077    7117 notify.go:220] Checking for updates...
	W0429 04:01:31.204086    7117 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 04:01:31.268707    7117 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 04:01:31.289791    7117 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 04:01:31.331658    7117 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:01:31.373721    7117 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	W0429 04:01:31.415493    7117 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 04:01:31.415864    7117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:01:31.469439    7117 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 04:01:31.469590    7117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:01:31.578016    7117 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:97 SystemTime:2024-04-29 11:01:31.566599317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:01:31.599342    7117 out.go:97] Using the docker driver based on user configuration
	I0429 04:01:31.599394    7117 start.go:297] selected driver: docker
	I0429 04:01:31.599416    7117 start.go:901] validating driver "docker" against <nil>
	I0429 04:01:31.599623    7117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:01:31.716240    7117 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:97 SystemTime:2024-04-29 11:01:31.704840157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:01:31.716435    7117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:01:31.719555    7117 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0429 04:01:31.719712    7117 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:01:31.741219    7117 out.go:169] Using Docker Desktop driver with root privileges
	I0429 04:01:31.762205    7117 cni.go:84] Creating CNI manager for ""
	I0429 04:01:31.762259    7117 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 04:01:31.762384    7117 start.go:340] cluster config:
	{Name:download-only-862000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:01:31.785128    7117 out.go:97] Starting "download-only-862000" primary control-plane node in "download-only-862000" cluster
	I0429 04:01:31.785205    7117 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 04:01:31.806269    7117 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 04:01:31.806361    7117 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:01:31.806468    7117 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 04:01:31.856043    7117 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 04:01:31.856320    7117 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 04:01:31.856466    7117 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 04:01:31.867044    7117 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 04:01:31.867078    7117 cache.go:56] Caching tarball of preloaded images
	I0429 04:01:31.867345    7117 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:01:31.889275    7117 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 04:01:31.889303    7117 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 04:01:31.964239    7117 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 04:01:36.756980    7117 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 04:01:36.757228    7117 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 04:01:37.368499    7117 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 04:01:37.368767    7117 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/download-only-862000/config.json ...
	I0429 04:01:37.368793    7117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/download-only-862000/config.json: {Name:mkc8b588187104aa7d3f1035922ef2fc94d19a97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 04:01:37.369157    7117 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 04:01:37.369477    7117 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	I0429 04:01:41.152852    7117 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	
	
	* The control-plane node download-only-862000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-862000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
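
The Last Start log above shows both download paths minikube uses: the preload tarball URL carries a "?checksum=md5:..." suffix and the kubectl URL a "?checksum=file:<url>.sha256" suffix, with download.go verifying the artifact after saving it (the checksum-in-query convention is the one hashicorp/go-getter uses, which minikube's downloader appears to wrap). A self-contained sketch of the md5 variant, using the URL and checksum from the log; note the preload is several hundred MB:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // fetchWithMD5 downloads url to dest while hashing the stream, then
    // compares against the expected md5 sum, mirroring the
    // "?checksum=md5:..." verification logged by download.go above.
    func fetchWithMD5(url, wantMD5, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
        if err := fetchWithMD5(url, "9a82241e9b8b4ad2b5cca73108f2c7a3", "preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }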

TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-862000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.30.0/json-events (7.06s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-620000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-620000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker : (7.06354904s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (7.06s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-620000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-620000: exit status 85 (300.811175ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-862000 | jenkins | v1.33.0 | 29 Apr 24 04:01 PDT |                     |
	|         | -p download-only-862000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 04:01 PDT | 29 Apr 24 04:01 PDT |
	| delete  | -p download-only-862000        | download-only-862000 | jenkins | v1.33.0 | 29 Apr 24 04:01 PDT | 29 Apr 24 04:01 PDT |
	| start   | -o=json --download-only        | download-only-620000 | jenkins | v1.33.0 | 29 Apr 24 04:01 PDT |                     |
	|         | -p download-only-620000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 04:01:43
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 04:01:43.510747    7191 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:01:43.510956    7191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:01:43.510961    7191 out.go:304] Setting ErrFile to fd 2...
	I0429 04:01:43.510965    7191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:01:43.511152    7191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 04:01:43.512652    7191 out.go:298] Setting JSON to true
	I0429 04:01:43.535550    7191 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1873,"bootTime":1714386630,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 04:01:43.535638    7191 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:01:43.557061    7191 out.go:97] [download-only-620000] minikube v1.33.0 on Darwin 14.4.1
	I0429 04:01:43.578847    7191 out.go:169] MINIKUBE_LOCATION=18756
	I0429 04:01:43.557299    7191 notify.go:220] Checking for updates...
	I0429 04:01:43.620781    7191 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 04:01:43.647691    7191 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 04:01:43.668923    7191 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:01:43.689780    7191 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	W0429 04:01:43.731686    7191 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 04:01:43.732189    7191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:01:43.787281    7191 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 04:01:43.787419    7191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:01:43.900513    7191 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:97 SystemTime:2024-04-29 11:01:43.889493338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:01:43.921468    7191 out.go:97] Using the docker driver based on user configuration
	I0429 04:01:43.921539    7191 start.go:297] selected driver: docker
	I0429 04:01:43.921560    7191 start.go:901] validating driver "docker" against <nil>
	I0429 04:01:43.921772    7191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:01:44.033050    7191 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:97 SystemTime:2024-04-29 11:01:44.021883223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:01:44.033258    7191 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 04:01:44.036121    7191 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0429 04:01:44.036271    7191 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 04:01:44.057981    7191 out.go:169] Using Docker Desktop driver with root privileges
	I0429 04:01:44.079956    7191 cni.go:84] Creating CNI manager for ""
	I0429 04:01:44.079999    7191 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 04:01:44.080018    7191 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 04:01:44.080135    7191 start.go:340] cluster config:
	{Name:download-only-620000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:01:44.101682    7191 out.go:97] Starting "download-only-620000" primary control-plane node in "download-only-620000" cluster
	I0429 04:01:44.101746    7191 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 04:01:44.122838    7191 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 04:01:44.122933    7191 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:01:44.123028    7191 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 04:01:44.172794    7191 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 04:01:44.172964    7191 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 04:01:44.172981    7191 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 04:01:44.172987    7191 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 04:01:44.172996    7191 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 04:01:44.174671    7191 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 04:01:44.174682    7191 cache.go:56] Caching tarball of preloaded images
	I0429 04:01:44.174850    7191 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 04:01:44.198547    7191 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 04:01:44.198568    7191 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 04:01:44.269355    7191 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 04:01:48.451158    7191 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 04:01:48.451338    7191 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18756-6674/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-620000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-620000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.30s)

TestDownloadOnly/v1.30.0/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.63s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-620000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.9s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-396000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-396000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-396000
--- PASS: TestDownloadOnlyKic (1.90s)

TestBinaryMirror (1.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-607000 --alsologtostderr --binary-mirror http://127.0.0.1:52268 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-607000 --alsologtostderr --binary-mirror http://127.0.0.1:52268 --driver=docker : (1.006368262s)
helpers_test.go:175: Cleaning up "binary-mirror-607000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-607000
--- PASS: TestBinaryMirror (1.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-816000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-816000: exit status 85 (175.140908ms)
-- stdout --
	* Profile "addons-816000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-816000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-816000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-816000: exit status 85 (196.13549ms)
-- stdout --
	* Profile "addons-816000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-816000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

TestAddons/Setup (339.08s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-816000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-816000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m39.083286356s)
--- PASS: TestAddons/Setup (339.08s)

TestAddons/parallel/InspektorGadget (11.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zf4vn" [30742685-12ab-44bf-b135-73aba49ac98f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005075042s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-816000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-816000: (5.784749709s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 1.924653ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-2qqmn" [0593bd78-846e-4d86-abf9-56f541a06005] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004864904s
addons_test.go:415: (dbg) Run:  kubectl --context addons-816000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-816000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/HelmTiller (10.39s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.677871ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-clszw" [7ffbfd7c-7309-4274-a90e-39e47cd8cbe7] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00450656s
addons_test.go:473: (dbg) Run:  kubectl --context addons-816000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-816000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.720926615s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-816000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.39s)

TestAddons/parallel/CSI (48.05s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 13.555006ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ecdd1954-9468-489c-aec3-0e90da924575] Pending
helpers_test.go:344: "task-pv-pod" [ecdd1954-9468-489c-aec3-0e90da924575] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ecdd1954-9468-489c-aec3-0e90da924575] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.005779713s
addons_test.go:584: (dbg) Run:  kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-816000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-816000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-816000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-816000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [faa7340b-cddb-431b-97af-806fec03141c] Pending
helpers_test.go:344: "task-pv-pod-restore" [faa7340b-cddb-431b-97af-806fec03141c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [faa7340b-cddb-431b-97af-806fec03141c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004281532s
addons_test.go:626: (dbg) Run:  kubectl --context addons-816000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-816000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-816000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-816000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-816000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.689107205s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-816000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.05s)
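
For reference, the snapshot/restore flow exercised above can be replayed by hand, roughly as follows (a sketch, assuming a running addons-816000 profile with the csi-hostpath-driver and volumesnapshots addons enabled, and the testdata manifests from the minikube source tree):

	# provision a PVC on the csi-hostpath storage class, then a pod that mounts it
	kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the volume, drop the original pod/PVC, then restore from the snapshot
	kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-816000 delete pod task-pv-pod
	kubectl --context addons-816000 delete pvc hpvc
	kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-816000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	# poll phase/readiness the same way the test helpers do
	kubectl --context addons-816000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default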

TestAddons/parallel/Headlamp (12.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-816000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-816000 --alsologtostderr -v=1: (1.09191238s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-24f4z" [b689fe3c-6c71-4990-b28b-72a871d0fc7e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-24f4z" [b689fe3c-6c71-4990-b28b-72a871d0fc7e] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-24f4z" [b689fe3c-6c71-4990-b28b-72a871d0fc7e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003942484s
--- PASS: TestAddons/parallel/Headlamp (12.10s)

TestAddons/parallel/CloudSpanner (5.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-4msh6" [0418eaf5-fc97-402b-973a-c5b6e50510c4] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005050142s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-816000
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

TestAddons/parallel/LocalPath (55.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-816000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-816000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [58d9d9cb-9355-40fb-9ffc-84bef1ab6ae4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [58d9d9cb-9355-40fb-9ffc-84bef1ab6ae4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [58d9d9cb-9355-40fb-9ffc-84bef1ab6ae4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004654476s
addons_test.go:891: (dbg) Run:  kubectl --context addons-816000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-816000 ssh "cat /opt/local-path-provisioner/pvc-aa77554a-b5fe-4fb9-b8fa-310aa87e160b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-816000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-816000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-816000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-816000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.087719173s)
--- PASS: TestAddons/parallel/LocalPath (55.03s)
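
Condensed, the local-path flow above is (a sketch; the provisioner writes under /opt/local-path-provisioner inside the node, and the pvc-<uuid> directory name varies per run, so the path below is illustrative):

	kubectl --context addons-816000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-816000 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-816000 get pvc test-pvc -o jsonpath={.status.phase} -n default
	# read back the file the pod wrote; substitute the PVC ID assigned during the run
	out/minikube-darwin-amd64 -p addons-816000 ssh "cat /opt/local-path-provisioner/pvc-<uuid>_default_test-pvc/file1"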

TestAddons/parallel/NvidiaDevicePlugin (5.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-krrrp" [fdcea396-59d7-4b3f-be2d-b962ea451d63] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005359073s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-816000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-d77v5" [4496fa5b-ef17-4277-aabd-057adb50818d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004275335s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-816000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-816000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.86s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-816000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-816000: (11.132692671s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-816000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-816000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-816000
--- PASS: TestAddons/StoppedEnableDisable (11.86s)
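
What this test pins down, condensed (a sketch): addon enable/disable must keep working against a cluster that has been stopped, not just a running one:

	out/minikube-darwin-amd64 stop -p addons-816000
	out/minikube-darwin-amd64 addons enable dashboard -p addons-816000
	out/minikube-darwin-amd64 addons disable dashboard -p addons-816000
	out/minikube-darwin-amd64 addons disable gvisor -p addons-816000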

TestHyperKitDriverInstallOrUpdate (7.74s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.74s)

TestErrorSpam/setup (20.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-314000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-314000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 --driver=docker : (20.175885436s)
--- PASS: TestErrorSpam/setup (20.18s)

TestErrorSpam/start (2.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 start --dry-run
--- PASS: TestErrorSpam/start (2.53s)

TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (2.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 stop: (2.160076719s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-314000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-314000 stop
--- PASS: TestErrorSpam/stop (2.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18756-6674/.minikube/files/etc/test/nested/copy/7115/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-653000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-653000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m15.054526356s)
--- PASS: TestFunctional/serial/StartWithProxy (75.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-653000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-653000 --alsologtostderr -v=8: (29.23735523s)
functional_test.go:659: soft start took 29.237817513s for "functional-653000" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.24s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-653000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (11.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:3.1: (3.936056843s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:3.3: (3.942848615s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:latest: (3.562062138s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (11.44s)
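
The cache workflow being timed here, condensed (a sketch, assuming the functional-653000 profile is up; the later cache subtests verify the result from inside the node):

	out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:3.1
	out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:3.3
	out/minikube-darwin-amd64 -p functional-653000 cache add registry.k8s.io/pause:latest
	# list the host-side cache, then confirm the images landed in the node's runtime
	out/minikube-darwin-amd64 cache list
	out/minikube-darwin-amd64 -p functional-653000 ssh sudo crictl images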

TestFunctional/serial/CacheCmd/cache/add_local (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3845458941/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cache add minikube-local-cache-test:functional-653000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 cache add minikube-local-cache-test:functional-653000: (1.059877837s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cache delete minikube-local-cache-test:functional-653000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-653000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.60s)
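
Equivalent manual steps for the local-image variant (a sketch; the build context is a throwaway temp dir, presumably holding a minimal Dockerfile):

	docker build -t minikube-local-cache-test:functional-653000 <tmpdir>
	out/minikube-darwin-amd64 -p functional-653000 cache add minikube-local-cache-test:functional-653000
	out/minikube-darwin-amd64 -p functional-653000 cache delete minikube-local-cache-test:functional-653000
	docker rmi minikube-local-cache-test:functional-653000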

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (380.68991ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 cache reload: (2.402499345s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.57s)
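
The reload sequence above, condensed (a sketch): delete the image out from under the runtime, confirm crictl no longer sees it, then repopulate it from minikube's on-host cache:

	out/minikube-darwin-amd64 -p functional-653000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-653000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-darwin-amd64 -p functional-653000 cache reload
	out/minikube-darwin-amd64 -p functional-653000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again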

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 kubectl -- --context functional-653000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.00s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-653000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-653000 get pods: (1.442321119s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)

TestFunctional/serial/ExtraConfig (41.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-653000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0429 04:12:35.326056    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:35.333683    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:35.344084    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:35.364422    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:35.405225    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:35.486514    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:35.647422    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:35.967616    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:36.607870    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:37.888460    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:40.449609    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:45.570809    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:12:55.811380    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-653000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.030009168s)
functional_test.go:757: restart took 41.030194739s for "functional-653000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.03s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-653000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
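
The health check above is a straight parse of `kubectl get po -o=json`: every control-plane pod must report phase Running with a Ready condition of True. A self-contained sketch of the same check, with the context name from this run and the pod struct trimmed to just the fields used:

```go
// Sketch only: verify control-plane pod phase and Ready condition.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-653000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}
```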

TestFunctional/serial/LogsCmd (3.03s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 logs: (3.031842691s)
--- PASS: TestFunctional/serial/LogsCmd (3.03s)

TestFunctional/serial/LogsFileCmd (3.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2803724687/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2803724687/001/logs.txt: (3.101088104s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.10s)

TestFunctional/serial/InvalidService (4.84s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-653000 apply -f testdata/invalidsvc.yaml
E0429 04:13:16.292227    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-653000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-653000: exit status 115 (536.244927ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30136 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-653000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-653000 delete -f testdata/invalidsvc.yaml: (1.155203296s)
--- PASS: TestFunctional/serial/InvalidService (4.84s)

TestFunctional/parallel/ConfigCmd (0.55s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 config get cpus: exit status 14 (68.449589ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 config get cpus: exit status 14 (69.520976ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)
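
Both Non-zero exits above assert the same behavior: `config get` on a key that is not set fails with exit status 14. A sketch of reading that exit code from Go (profile name from this run; ignoring errors on `set`/`unset` is a simplification):

```go
// Sketch only: exercise the set/unset/get cycle and read the exit code.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// getCpus returns the value of the cpus key, or the non-zero exit code.
func getCpus() (string, int) {
	cmd := exec.Command("minikube", "-p", "functional-653000", "config", "get", "cpus")
	out, err := cmd.Output()
	if ee, ok := err.(*exec.ExitError); ok {
		return "", ee.ExitCode() // 14 => key not found in config
	} else if err != nil {
		log.Fatal(err)
	}
	return string(out), 0
}

func main() {
	exec.Command("minikube", "-p", "functional-653000", "config", "unset", "cpus").Run()
	if _, code := getCpus(); code != 14 {
		log.Fatalf("expected exit status 14 for unset key, got %d", code)
	}
	exec.Command("minikube", "-p", "functional-653000", "config", "set", "cpus", "2").Run()
	v, code := getCpus()
	fmt.Printf("cpus=%s(exit %d)\n", v, code)
}
```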

TestFunctional/parallel/DashboardCmd (23.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-653000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-653000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 9427: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.06s)

TestFunctional/parallel/DryRun (1.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-653000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-653000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (647.381165ms)

-- stdout --
	* [functional-653000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0429 04:14:07.804780    9364 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:14:07.804962    9364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:14:07.804967    9364 out.go:304] Setting ErrFile to fd 2...
	I0429 04:14:07.804971    9364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:14:07.805157    9364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 04:14:07.806512    9364 out.go:298] Setting JSON to false
	I0429 04:14:07.829115    9364 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2617,"bootTime":1714386630,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 04:14:07.829211    9364 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:14:07.851189    9364 out.go:177] * [functional-653000] minikube v1.33.0 on Darwin 14.4.1
	I0429 04:14:07.894059    9364 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 04:14:07.894102    9364 notify.go:220] Checking for updates...
	I0429 04:14:07.937835    9364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 04:14:07.963701    9364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 04:14:07.984562    9364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:14:08.005823    9364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 04:14:08.026610    9364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:14:08.048141    9364 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:14:08.048912    9364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:14:08.103746    9364 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 04:14:08.103923    9364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:14:08.217763    9364 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:106 SystemTime:2024-04-29 11:14:08.207085449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:14:08.260075    9364 out.go:177] * Using the docker driver based on existing profile
	I0429 04:14:08.280942    9364 start.go:297] selected driver: docker
	I0429 04:14:08.280980    9364 start.go:901] validating driver "docker" against &{Name:functional-653000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-653000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:14:08.281107    9364 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:14:08.306130    9364 out.go:177] 
	W0429 04:14:08.327223    9364 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 04:14:08.347983    9364 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-653000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.37s)
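
The first run shows `--dry-run` validating flags against the existing profile without mutating it: requesting 250MB trips the 1800MB floor and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch that checks for exactly that code; flags mirror the log, `minikube` on PATH is an assumption:

```go
// Sketch only: use --dry-run to validate a start configuration up front.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-653000",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 23 {
		fmt.Println("rejected as expected: requested memory is below the 1800MB minimum")
		return
	}
	fmt.Println("dry run passed, or failed for another reason:", err)
}
```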

TestFunctional/parallel/InternationalLanguage (0.67s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-653000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-653000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (671.079485ms)

-- stdout --
	* [functional-653000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18756
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0429 04:14:07.163080    9346 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:14:07.163570    9346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:14:07.163582    9346 out.go:304] Setting ErrFile to fd 2...
	I0429 04:14:07.163590    9346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:14:07.163976    9346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 04:14:07.185185    9346 out.go:298] Setting JSON to false
	I0429 04:14:07.209395    9346 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2617,"bootTime":1714386630,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0429 04:14:07.209497    9346 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 04:14:07.231170    9346 out.go:177] * [functional-653000] minikube v1.33.0 sur Darwin 14.4.1
	I0429 04:14:07.274344    9346 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 04:14:07.274403    9346 notify.go:220] Checking for updates...
	I0429 04:14:07.295154    9346 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
	I0429 04:14:07.316258    9346 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 04:14:07.337213    9346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 04:14:07.358124    9346 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube
	I0429 04:14:07.379262    9346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 04:14:07.400936    9346 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:14:07.401679    9346 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 04:14:07.457053    9346 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 04:14:07.457228    9346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 04:14:07.570484    9346 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:106 SystemTime:2024-04-29 11:14:07.559079913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211084288 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 04:14:07.612769    9346 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0429 04:14:07.633526    9346 start.go:297] selected driver: docker
	I0429 04:14:07.633561    9346 start.go:901] validating driver "docker" against &{Name:functional-653000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-653000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 04:14:07.633703    9346 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 04:14:07.658751    9346 out.go:177] 
	W0429 04:14:07.679639    9346 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 04:14:07.700688    9346 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.67s)

TestFunctional/parallel/StatusCmd (1.2s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
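
The `-f` argument above is a plain Go template over the status struct's exported fields (Host, Kubelet, APIServer, Kubeconfig); the text before each colon is an arbitrary label, which is why the log's `kublet:` spelling is harmless. A sketch requesting a smaller format (note that `status` exits non-zero when components are stopped):

```go
// Sketch only: custom status format via a Go template.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-653000",
		"status", "-f", "host:{{.Host}},apiserver:{{.APIServer}}").CombinedOutput()
	if err != nil {
		// Expected whenever a component is stopped; the output still prints.
		log.Printf("non-zero status exit: %v", err)
	}
	fmt.Printf("%s\n", out)
}
```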

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (30.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [34fd0b14-e5da-448e-91d1-bfb3cd95ad09] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005056634s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-653000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-653000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-653000 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-653000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-653000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f3733fe7-a9a7-4fb6-989c-30884d3f467b] Pending
helpers_test.go:344: "sp-pod" [f3733fe7-a9a7-4fb6-989c-30884d3f467b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f3733fe7-a9a7-4fb6-989c-30884d3f467b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004620639s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-653000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-653000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-653000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [89cfbf94-e8ca-4de8-add3-95bc0351f2eb] Pending
helpers_test.go:344: "sp-pod" [89cfbf94-e8ca-4de8-add3-95bc0351f2eb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [89cfbf94-e8ca-4de8-add3-95bc0351f2eb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004286609s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-653000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.76s)
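
The pass proves persistence, not just provisioning: `/tmp/mount/foo`, written through the first `sp-pod`, must still exist after the pod is deleted and recreated against the same claim. A condensed sketch of that flow, using the testdata manifests named in the log and `kubectl wait` in place of the harness's own poll (the 3m timeout is an assumption):

```go
// Sketch only: PVC data must survive pod deletion and recreation.
package main

import (
	"log"
	"os/exec"
)

// kc runs kubectl against the functional-653000 context and aborts on error.
func kc(args ...string) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-653000"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kc("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo must survive the pod swap
}
```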

TestFunctional/parallel/SSHCmd (0.8s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

TestFunctional/parallel/CpCmd (2.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh -n functional-653000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cp functional-653000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd875610176/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh -n functional-653000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh -n functional-653000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.69s)

TestFunctional/parallel/MySQL (30.28s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-653000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-s5qnd" [b246ecf5-578d-4843-8edc-5d25a122e548] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-s5qnd" [b246ecf5-578d-4843-8edc-5d25a122e548] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.005285302s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-653000 exec mysql-64454c8b5c-s5qnd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-653000 exec mysql-64454c8b5c-s5qnd -- mysql -ppassword -e "show databases;": exit status 1 (118.147066ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-653000 exec mysql-64454c8b5c-s5qnd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-653000 exec mysql-64454c8b5c-s5qnd -- mysql -ppassword -e "show databases;": exit status 1 (113.008605ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-653000 exec mysql-64454c8b5c-s5qnd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.28s)
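
The two ERROR 2002 failures above are expected noise: the pod is Running but mysqld is still initializing its socket, so the harness retries the query until it connects. A sketch of that retry loop with this run's pod name; the 2-minute deadline and 2-second interval are assumptions:

```go
// Sketch only: retry a query until mysqld inside the pod accepts connections.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-653000",
			"exec", "mysql-64454c8b5c-s5qnd", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never came up: %v\n%s", err, out)
		}
		time.Sleep(2 * time.Second) // pod is Running but mysqld is still initializing
	}
}
```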

TestFunctional/parallel/FileSync (0.48s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7115/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo cat /etc/test/nested/copy/7115/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)

TestFunctional/parallel/CertSync (2.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7115.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo cat /etc/ssl/certs/7115.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7115.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo cat /usr/share/ca-certificates/7115.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/71152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo cat /etc/ssl/certs/71152.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/71152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo cat /usr/share/ca-certificates/71152.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.56s)
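
The same certificate is expected at a PID-derived `.pem` name in two directories plus what looks like an OpenSSL subject-hash name (`51391683.0`); that hash reading is an inference, not stated in the log. A sketch probing the three paths from this run over `minikube ssh`:

```go
// Sketch only: confirm the synced cert is readable at each expected path.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/7115.pem",
		"/usr/share/ca-certificates/7115.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		out, err := exec.Command("minikube", "-p", "functional-653000",
			"ssh", fmt.Sprintf("sudo cat %s", p)).CombinedOutput()
		if err != nil {
			log.Fatalf("%s missing: %v\n%s", p, err, out)
		}
		fmt.Printf("%s: %d bytes\n", p, len(out))
	}
}
```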

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-653000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
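
The `--template` string above is ordinary Go `text/template`; ranging over a map visits keys in sorted order, so the output is stable across runs. A self-contained illustration of the same template against a toy label map (the real command ranges `(index .items 0).metadata.labels`):

```go
// Sketch only: the kubectl go-template, run locally against sample labels.
package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-653000",
		"kubernetes.io/os":       "linux",
		"minikube.k8s.io/name":   "functional-653000",
	}
	tmpl := template.Must(template.New("labels").Parse(
		"'{{range $k, $v := .}}{{$k}} {{end}}'\n"))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		log.Fatal(err)
	}
}
```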

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 ssh "sudo systemctl is-active crio": exit status 1 (454.571793ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
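
Here the failure is the assertion: `systemctl is-active` prints the unit state and exits non-zero when the unit is not active (the ssh layer reports its status 3 above), so a failing exit with `inactive` on stdout proves cri-o is disabled under the docker runtime. A sketch of the same probe:

```go
// Sketch only: a non-zero exit plus "inactive" is the expected outcome.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-653000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("crio is disabled, as the test requires")
		return
	}
	fmt.Printf("unexpected: state=%q err=%v\n", state, err)
}
```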

TestFunctional/parallel/License (0.55s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-653000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-653000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-653000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-653000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8876: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-653000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-653000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4a4b55f5-40a4-4ae1-9823-5fc51cf53f2d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4a4b55f5-40a4-4ae1-9823-5fc51cf53f2d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005800246s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)
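
The ten-second wait above is a label-selector poll; outside the harness, `kubectl wait` expresses the same readiness condition in one command. A sketch, reusing this run's context and selector (the 4m timeout matches the harness's budget):

```go
// Sketch only: block until the nginx-svc pod behind the tunnel is Ready.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-653000",
		"wait", "--for=condition=Ready", "pod", "-l", "run=nginx-svc",
		"--timeout=4m").CombinedOutput()
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```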

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-653000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-653000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8931: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
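
Taken together, the five TunnelCmd/serial subtests above exercise one lifecycle: start `minikube tunnel` as a background daemon, wait for the nginx-svc LoadBalancer to be assigned an ingress IP, hit that IP directly, then kill the tunnel. What follows is a minimal Go sketch of that flow, assuming a minikube and kubectl on PATH (the run above uses out/minikube-darwin-amd64); it mirrors the commands shown in the log, not the harness's actual helper code.

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// StartTunnel: run the tunnel as a background daemon.
	tunnel := exec.Command("minikube", "-p", "functional-653000", "tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil {
		panic(err)
	}
	defer tunnel.Process.Kill() // DeleteTunnel: stop the daemon when done

	// WaitService/IngressIP: poll until the LoadBalancer reports an IP,
	// using the same jsonpath query the test runs.
	var ip string
	for i := 0; i < 60 && ip == ""; i++ {
		out, _ := exec.Command("kubectl", "--context", "functional-653000",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		ip = string(out)
		time.Sleep(time.Second)
	}

	// AccessDirect: with the tunnel up, the ingress IP is reachable locally.
	if resp, err := http.Get("http://" + ip); err == nil {
		resp.Body.Close()
		fmt.Printf("tunnel at http://%s is working! (status %d)\n", ip, resp.StatusCode)
	}
}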

TestFunctional/parallel/ServiceCmd/DeployApp (8.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-653000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-653000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-smv65" [8f84e155-c69b-4091-926d-63581fb6365c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-smv65" [8f84e155-c69b-4091-926d-63581fb6365c] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003131311s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.12s)
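
The "waiting 10m0s for pods matching" lines come from a polling helper. Below is a simplified Go sketch of the same idea, assuming kubectl on PATH and checking only the pod phase; the real helper also inspects Ready conditions, as the Pending / Ready:ContainersNotReady transitions above show.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether the list is non-empty and every phase is "Running".
func allRunning(phases []string) bool {
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return len(phases) > 0
}

func main() {
	deadline := time.Now().Add(10 * time.Minute) // the test allows 10m0s
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "functional-653000",
			"get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if phases := strings.Fields(string(out)); allRunning(phases) {
			fmt.Println("app=hello-node healthy:", phases)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}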

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "442.461547ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.261748ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "448.515646ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "86.501584ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)
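
`profile list -o json` is the machine-readable variant, and `--light` skips the slower status probes, which is why it returns in ~86ms versus ~448ms above. A hypothetical consumer in Go, decoding into a generic map because the exact output schema is not shown anywhere in this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// --light trades cluster-status detail for speed.
	out, err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	// Decode generically; this assumes only that the top level is a JSON object.
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(out, &payload); err != nil {
		panic(err)
	}
	for key := range payload {
		fmt.Println("top-level key:", key)
	}
}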

TestFunctional/parallel/MountCmd/any-port (7.56s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3165423115/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714389233360615000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3165423115/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714389233360615000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3165423115/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714389233360615000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3165423115/001/test-1714389233360615000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.571843ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 11:13 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 11:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 11:13 test-1714389233360615000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh cat /mount-9p/test-1714389233360615000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-653000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aa0daa4a-7388-45e7-8f8c-d6a65ca0733b] Pending
helpers_test.go:344: "busybox-mount" [aa0daa4a-7388-45e7-8f8c-d6a65ca0733b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aa0daa4a-7388-45e7-8f8c-d6a65ca0733b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aa0daa4a-7388-45e7-8f8c-d6a65ca0733b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004396285s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-653000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3165423115/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.56s)
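
The first findmnt probe above fails with exit status 1 because it races the freshly started mount daemon; the test simply retries. A condensed Go sketch of that start-and-retry pattern, with a hypothetical host directory standing in for the per-test temp dir:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// /tmp/hostdir is a hypothetical stand-in for the per-test temp directory.
	mount := exec.Command("minikube", "mount", "-p", "functional-653000",
		"/tmp/hostdir:/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// The daemon needs a moment; retry the probe instead of failing on the
	// first non-zero exit, exactly as the test does.
	for i := 0; i < 10; i++ {
		probe := exec.Command("minikube", "-p", "functional-653000",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := probe.Output(); err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}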

TestFunctional/parallel/ServiceCmd/List (0.6s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 service list -o json
E0429 04:13:57.255075    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
functional_test.go:1490: Took "616.232551ms" to run "out/minikube-darwin-amd64 -p functional-653000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 service --namespace=default --https --url hello-node: signal: killed (15.002742768s)

-- stdout --
	https://127.0.0.1:53288

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:53288
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
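
"signal: killed (15.00...s)" on a passing test is expected here: with the Docker driver on darwin, `minikube service --url` keeps running to hold its tunnel open (hence the "terminal needs to be open" warning), so the harness reads the printed URL and kills the process after a fixed window. A Go sketch of the same pattern using a context deadline; this is a simplification, not the harness code.

package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Kill the long-running service command after 15s, as the harness does.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "minikube", "-p", "functional-653000",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	if sc := bufio.NewScanner(stdout); sc.Scan() {
		fmt.Println("found endpoint:", sc.Text()) // e.g. https://127.0.0.1:53288
	}
	cmd.Wait() // returns once the deadline kills the tunnel-holding process
}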

TestFunctional/parallel/MountCmd/specific-port (2.26s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port4213219298/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (387.375177ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port4213219298/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 ssh "sudo umount -f /mount-9p": exit status 1 (356.594977ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-653000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port4213219298/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.7s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup98943096/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup98943096/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup98943096/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T" /mount1: exit status 1 (495.323411ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T" /mount1: (1.006244094s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-653000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup98943096/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup98943096/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-653000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup98943096/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.70s)

TestFunctional/parallel/ServiceCmd/Format (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 service hello-node --url --format={{.IP}}: signal: killed (15.003973244s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 service hello-node --url
2024/04/29 04:14:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 service hello-node --url: signal: killed (15.002406966s)

-- stdout --
	http://127.0.0.1:53386

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:53386
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/Version/short (0.12s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.96s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-653000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-653000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-653000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-653000 image ls --format short --alsologtostderr:
I0429 04:14:56.303907    9768 out.go:291] Setting OutFile to fd 1 ...
I0429 04:14:56.304172    9768 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:14:56.304179    9768 out.go:304] Setting ErrFile to fd 2...
I0429 04:14:56.304183    9768 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:14:56.304395    9768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
I0429 04:14:56.305143    9768 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:14:56.305268    9768 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:14:56.305719    9768 cli_runner.go:164] Run: docker container inspect functional-653000 --format={{.State.Status}}
I0429 04:14:56.361270    9768 ssh_runner.go:195] Run: systemctl --version
I0429 04:14:56.361350    9768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-653000
I0429 04:14:56.415887    9768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53022 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/functional-653000/id_rsa Username:docker}
I0429 04:14:56.500818    9768 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
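
The stderr trace shows how every `image ls` variant reaches the node: inspect the container for the host port mapped to 22/tcp, open an SSH session on it, and run `docker images --no-trunc --format "{{json .}}"` inside. A sketch of just the port-discovery step, reusing the exact template from the cli_runner line above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the minikube CLI passes to docker inspect.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"functional-653000").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 53022 in this run
}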

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-653000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-653000 | c6d4b3411f10c | 1.24MB |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-653000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-653000 | 9c5795ad46a70 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-653000 image ls --format table --alsologtostderr:
I0429 04:15:00.711165    9809 out.go:291] Setting OutFile to fd 1 ...
I0429 04:15:00.711477    9809 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:15:00.711484    9809 out.go:304] Setting ErrFile to fd 2...
I0429 04:15:00.711489    9809 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:15:00.711693    9809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
I0429 04:15:00.712411    9809 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:15:00.712543    9809 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:15:00.712991    9809 cli_runner.go:164] Run: docker container inspect functional-653000 --format={{.State.Status}}
I0429 04:15:00.773786    9809 ssh_runner.go:195] Run: systemctl --version
I0429 04:15:00.773876    9809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-653000
I0429 04:15:00.831143    9809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53022 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/functional-653000/id_rsa Username:docker}
I0429 04:15:00.917294    9809 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-653000 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"c6d4b3411f10ccf9fdf78bb98b950dc634ca91b0026bf3865ece30296375b40f","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-653000"],"size":"1240000"},{"id":"9c5795ad46a70d0eadb7b777d5bb43cae177ef7
0bf5c87eb7a3260334447bc5a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-653000"],"size":"30"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["regist
ry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-653000"],"size":"32900000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/libr
ary/nginx:alpine"],"size":"48300000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-653000 image ls --format json --alsologtostderr:
I0429 04:15:00.370278    9802 out.go:291] Setting OutFile to fd 1 ...
I0429 04:15:00.370533    9802 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:15:00.370539    9802 out.go:304] Setting ErrFile to fd 2...
I0429 04:15:00.370543    9802 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:15:00.370730    9802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
I0429 04:15:00.371421    9802 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:15:00.371537    9802 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:15:00.371997    9802 cli_runner.go:164] Run: docker container inspect functional-653000 --format={{.State.Status}}
I0429 04:15:00.433349    9802 ssh_runner.go:195] Run: systemctl --version
I0429 04:15:00.433464    9802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-653000
I0429 04:15:00.495901    9802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53022 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/functional-653000/id_rsa Username:docker}
I0429 04:15:00.588014    9802 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
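
The JSON output above is an array of objects with id, repoDigests, repoTags, and size (bytes, encoded as a string). A small Go decoder for exactly that shape, assuming minikube on PATH:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the object shape visible in the stdout above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-653000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}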

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-653000 image ls --format yaml --alsologtostderr:
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 9c5795ad46a70d0eadb7b777d5bb43cae177ef70bf5c87eb7a3260334447bc5a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-653000
size: "30"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-653000
size: "32900000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-653000 image ls --format yaml --alsologtostderr:
I0429 04:14:56.630405    9774 out.go:291] Setting OutFile to fd 1 ...
I0429 04:14:56.630643    9774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:14:56.630650    9774 out.go:304] Setting ErrFile to fd 2...
I0429 04:14:56.630655    9774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:14:56.630853    9774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
I0429 04:14:56.631593    9774 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:14:56.631708    9774 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:14:56.632263    9774 cli_runner.go:164] Run: docker container inspect functional-653000 --format={{.State.Status}}
I0429 04:14:56.690692    9774 ssh_runner.go:195] Run: systemctl --version
I0429 04:14:56.690781    9774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-653000
I0429 04:14:56.749989    9774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53022 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/functional-653000/id_rsa Username:docker}
I0429 04:14:56.837823    9774 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-653000 ssh pgrep buildkitd: exit status 1 (415.519407ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image build -t localhost/my-image:functional-653000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 image build -t localhost/my-image:functional-653000 testdata/build --alsologtostderr: (2.662607615s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-653000 image build -t localhost/my-image:functional-653000 testdata/build --alsologtostderr:
I0429 04:14:57.386988    9790 out.go:291] Setting OutFile to fd 1 ...
I0429 04:14:57.387685    9790 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:14:57.387736    9790 out.go:304] Setting ErrFile to fd 2...
I0429 04:14:57.387750    9790 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 04:14:57.388650    9790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
I0429 04:14:57.389381    9790 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:14:57.390083    9790 config.go:182] Loaded profile config "functional-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 04:14:57.390534    9790 cli_runner.go:164] Run: docker container inspect functional-653000 --format={{.State.Status}}
I0429 04:14:57.454147    9790 ssh_runner.go:195] Run: systemctl --version
I0429 04:14:57.454220    9790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-653000
I0429 04:14:57.512827    9790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53022 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/functional-653000/id_rsa Username:docker}
I0429 04:14:57.601908    9790 build_images.go:161] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3197659394.tar
I0429 04:14:57.601991    9790 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 04:14:57.613738    9790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3197659394.tar
I0429 04:14:57.618676    9790 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3197659394.tar: stat -c "%s %y" /var/lib/minikube/build/build.3197659394.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3197659394.tar': No such file or directory
I0429 04:14:57.618713    9790 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3197659394.tar --> /var/lib/minikube/build/build.3197659394.tar (3072 bytes)
I0429 04:14:57.647260    9790 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3197659394
I0429 04:14:57.707117    9790 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3197659394 -xf /var/lib/minikube/build/build.3197659394.tar
I0429 04:14:57.718480    9790 docker.go:360] Building image: /var/lib/minikube/build/build.3197659394
I0429 04:14:57.718586    9790 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-653000 /var/lib/minikube/build/build.3197659394
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c6d4b3411f10ccf9fdf78bb98b950dc634ca91b0026bf3865ece30296375b40f done
#8 naming to localhost/my-image:functional-653000 done
#8 DONE 0.0s
I0429 04:14:59.923425    9790 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-653000 /var/lib/minikube/build/build.3197659394: (2.204761212s)
I0429 04:14:59.923512    9790 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3197659394
I0429 04:14:59.934442    9790 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3197659394.tar
I0429 04:14:59.945067    9790 build_images.go:217] Built localhost/my-image:functional-653000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3197659394.tar
I0429 04:14:59.945120    9790 build_images.go:133] succeeded building to: functional-653000
I0429 04:14:59.945134    9790 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)
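
ImageBuild first probes for buildkitd over SSH; the exit status 1 above means it is not running, so minikube tars the local context, copies it into the node, and runs a plain `docker build` there (all visible in the stderr trace). A sketch of the user-facing equivalent of that sequence, assuming minikube on PATH and a testdata/build context directory:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe for buildkitd in the node; a non-zero exit (as in the log)
	// means minikube will fall back to the plain docker build path.
	if err := exec.Command("minikube", "-p", "functional-653000",
		"ssh", "pgrep buildkitd").Run(); err != nil {
		fmt.Println("buildkitd not running; plain docker build will be used")
	}
	out, err := exec.Command("minikube", "-p", "functional-653000",
		"image", "build", "-t", "localhost/my-image:functional-653000",
		"testdata/build", "--alsologtostderr").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}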

TestFunctional/parallel/ImageCommands/Setup (2.19s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.117044123s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-653000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image load --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 image load --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr: (3.617109338s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.92s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image load --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 image load --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr: (2.07904604s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.825010045s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-653000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image load --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 image load --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr: (3.492108351s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)
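
This subtest is the pull, retag, and load pipeline in miniature. A hypothetical Go sketch of the same three steps, with image names taken from the log:

package main

import "os/exec"

func main() {
	pulled := "gcr.io/google-containers/addon-resizer:1.8.9"
	tagged := "gcr.io/google-containers/addon-resizer:functional-653000"
	steps := [][]string{
		{"docker", "pull", pulled},       // fetch on the host
		{"docker", "tag", pulled, tagged}, // retag with the profile-scoped tag
		{"minikube", "-p", "functional-653000",
			"image", "load", "--daemon", tagged}, // push into the cluster runtime
	}
	for _, s := range steps {
		if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
			panic(err)
		}
	}
}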

TestFunctional/parallel/DockerEnv/bash (1.74s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-653000 docker-env) && out/minikube-darwin-amd64 status -p functional-653000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-653000 docker-env) && out/minikube-darwin-amd64 status -p functional-653000": (1.048067159s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-653000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.74s)
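
`docker-env` prints export statements, so the test drives a real shell rather than exec'ing docker directly. A sketch of the same invocation from Go, assuming /bin/bash and a minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The export lines from docker-env must be eval'd by a shell before
	// docker in the same process can talk to the in-cluster daemon.
	script := `eval $(minikube -p functional-653000 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}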

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image save gcr.io/google-containers/addon-resizer:functional-653000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 image save gcr.io/google-containers/addon-resizer:functional-653000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.20103639s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image rm gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.98739822s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.32s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-653000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-653000 image save --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-653000 image save --daemon gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr: (1.585254848s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-653000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.71s)
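
The --daemon variant skips the tarball and writes the image straight into the host's Docker daemon, which is why the test first removes the host-side copy and then checks it with docker image inspect. A sketch of the same sequence:

    # Remove the host-side copy so a successful save is observable.
    docker rmi gcr.io/google-containers/addon-resizer:functional-653000

    # Export from the cluster runtime directly into the host Docker daemon.
    out/minikube-darwin-amd64 -p functional-653000 image save --daemon \
      gcr.io/google-containers/addon-resizer:functional-653000 --alsologtostderr

    # Pass criterion: the image is inspectable on the host again.
    docker image inspect gcr.io/google-containers/addon-resizer:functional-653000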

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-653000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-653000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-653000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (104.33s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-821000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-821000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m43.267216319s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: (1.066921376s)
--- PASS: TestMultiControlPlane/serial/StartCluster (104.33s)
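
The --ha flag is what makes this a multi-control-plane cluster: minikube provisions three control-plane nodes (plus any workers added later) instead of one. A sketch of the start-and-verify pair from this run:

    # --ha provisions a highly available cluster with three control-plane nodes.
    out/minikube-darwin-amd64 start -p ha-821000 --wait=true --memory=2200 \
      --ha -v=7 --alsologtostderr --driver=docker

    # status reports host/kubelet/apiserver/kubeconfig state per node.
    out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr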

TestMultiControlPlane/serial/DeployApp (105.35s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- rollout status deployment/busybox
E0429 04:17:35.331987    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-821000 -- rollout status deployment/busybox: (54.37389497s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0429 04:18:03.021795    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0429 04:18:20.233869    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:20.239059    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:20.249999    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:20.270324    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:20.310744    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:20.391091    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0429 04:18:20.552164    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0429 04:18:20.872333    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:21.512740    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:22.793253    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:25.354394    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0429 04:18:30.474824    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:18:40.715832    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-mh6pf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-pgt4c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-r6n5c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-mh6pf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-pgt4c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-r6n5c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-mh6pf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-pgt4c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-r6n5c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (105.35s)
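
The repeated ha_test.go:140/149 pairs above are a poll loop, not a failure: the test re-runs the same jsonpath query until each of the three busybox replicas reports a pod IP (the "may be temporary" note marks the retries). The query itself, runnable by hand:

    # Prints one space-separated IP per scheduled pod; the test waits for three.
    out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods \
      -o jsonpath='{.items[*].status.podIP}'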

TestMultiControlPlane/serial/PingHostFromPods (1.45s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-mh6pf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-mh6pf -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-pgt4c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-pgt4c -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-r6n5c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec busybox-fc5497c4f-r6n5c -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)
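
Each sh -c pipeline resolves host.minikube.internal from inside a pod, slices the address out of nslookup's fixed-format output (fifth line, third field), and then pings it. A sketch for one of the pods from this run:

    # Resolve the host gateway address from inside the pod...
    HOST_IP=$(out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec \
      busybox-fc5497c4f-mh6pf -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")

    # ...then confirm the pod can reach the host (192.168.65.254 in this run).
    out/minikube-darwin-amd64 kubectl -p ha-821000 -- exec \
      busybox-fc5497c4f-mh6pf -- sh -c "ping -c 1 $HOST_IP"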

TestMultiControlPlane/serial/AddWorkerNode (19.27s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-821000 -v=7 --alsologtostderr
E0429 04:19:01.196445    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-821000 -v=7 --alsologtostderr: (17.909598222s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: (1.362273714s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.27s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-821000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.140100517s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

TestMultiControlPlane/serial/CopyFile (25.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 status --output json -v=7 --alsologtostderr: (1.389947016s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp testdata/cp-test.txt ha-821000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile2542809122/001/cp-test_ha-821000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000:/home/docker/cp-test.txt ha-821000-m02:/home/docker/cp-test_ha-821000_ha-821000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test_ha-821000_ha-821000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000:/home/docker/cp-test.txt ha-821000-m03:/home/docker/cp-test_ha-821000_ha-821000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test_ha-821000_ha-821000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000:/home/docker/cp-test.txt ha-821000-m04:/home/docker/cp-test_ha-821000_ha-821000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test_ha-821000_ha-821000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp testdata/cp-test.txt ha-821000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile2542809122/001/cp-test_ha-821000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m02:/home/docker/cp-test.txt ha-821000:/home/docker/cp-test_ha-821000-m02_ha-821000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test_ha-821000-m02_ha-821000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m02:/home/docker/cp-test.txt ha-821000-m03:/home/docker/cp-test_ha-821000-m02_ha-821000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test_ha-821000-m02_ha-821000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m02:/home/docker/cp-test.txt ha-821000-m04:/home/docker/cp-test_ha-821000-m02_ha-821000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test_ha-821000-m02_ha-821000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp testdata/cp-test.txt ha-821000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile2542809122/001/cp-test_ha-821000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m03:/home/docker/cp-test.txt ha-821000:/home/docker/cp-test_ha-821000-m03_ha-821000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test_ha-821000-m03_ha-821000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m03:/home/docker/cp-test.txt ha-821000-m02:/home/docker/cp-test_ha-821000-m03_ha-821000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test_ha-821000-m03_ha-821000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m03:/home/docker/cp-test.txt ha-821000-m04:/home/docker/cp-test_ha-821000-m03_ha-821000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test_ha-821000-m03_ha-821000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp testdata/cp-test.txt ha-821000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m04:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile2542809122/001/cp-test_ha-821000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m04:/home/docker/cp-test.txt ha-821000:/home/docker/cp-test_ha-821000-m04_ha-821000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000 "sudo cat /home/docker/cp-test_ha-821000-m04_ha-821000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m04:/home/docker/cp-test.txt ha-821000-m02:/home/docker/cp-test_ha-821000-m04_ha-821000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 "sudo cat /home/docker/cp-test_ha-821000-m04_ha-821000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 cp ha-821000-m04:/home/docker/cp-test.txt ha-821000-m03:/home/docker/cp-test_ha-821000-m04_ha-821000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m03 "sudo cat /home/docker/cp-test_ha-821000-m04_ha-821000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (25.04s)
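
Every leg above follows the same pattern: minikube cp moves the file (host-to-node, node-to-host, or node-to-node), and an ssh -n ... "sudo cat" round trip verifies the content landed. One leg as a hand-run sketch:

    # Host -> node, then node -> node.
    out/minikube-darwin-amd64 -p ha-821000 cp testdata/cp-test.txt \
      ha-821000:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p ha-821000 cp ha-821000:/home/docker/cp-test.txt \
      ha-821000-m02:/home/docker/cp-test_ha-821000_ha-821000-m02.txt

    # Verify the copy by reading it back over ssh.
    out/minikube-darwin-amd64 -p ha-821000 ssh -n ha-821000-m02 \
      "sudo cat /home/docker/cp-test_ha-821000_ha-821000-m02.txt"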

TestMultiControlPlane/serial/StopSecondaryNode (11.89s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 node stop m02 -v=7 --alsologtostderr
E0429 04:19:42.117228    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 node stop m02 -v=7 --alsologtostderr: (10.870390289s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: exit status 7 (1.016599018s)

-- stdout --
	ha-821000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-821000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0429 04:19:48.494954   11128 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:19:48.495200   11128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:19:48.495208   11128 out.go:304] Setting ErrFile to fd 2...
	I0429 04:19:48.495213   11128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:19:48.495415   11128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 04:19:48.495610   11128 out.go:298] Setting JSON to false
	I0429 04:19:48.495635   11128 mustload.go:65] Loading cluster: ha-821000
	I0429 04:19:48.495677   11128 notify.go:220] Checking for updates...
	I0429 04:19:48.496457   11128 config.go:182] Loaded profile config "ha-821000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:19:48.496515   11128 status.go:255] checking status of ha-821000 ...
	I0429 04:19:48.497400   11128 cli_runner.go:164] Run: docker container inspect ha-821000 --format={{.State.Status}}
	I0429 04:19:48.547936   11128 status.go:330] ha-821000 host status = "Running" (err=<nil>)
	I0429 04:19:48.547974   11128 host.go:66] Checking if "ha-821000" exists ...
	I0429 04:19:48.548236   11128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-821000
	I0429 04:19:48.597479   11128 host.go:66] Checking if "ha-821000" exists ...
	I0429 04:19:48.597760   11128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 04:19:48.597822   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-821000
	I0429 04:19:48.648100   11128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53487 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/ha-821000/id_rsa Username:docker}
	I0429 04:19:48.731036   11128 ssh_runner.go:195] Run: systemctl --version
	I0429 04:19:48.735950   11128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 04:19:48.746855   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-821000
	I0429 04:19:48.797577   11128 kubeconfig.go:125] found "ha-821000" server: "https://127.0.0.1:53491"
	I0429 04:19:48.797606   11128 api_server.go:166] Checking apiserver status ...
	I0429 04:19:48.797644   11128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:19:48.808525   11128 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2204/cgroup
	W0429 04:19:48.817407   11128 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2204/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 04:19:48.817464   11128 ssh_runner.go:195] Run: ls
	I0429 04:19:48.821415   11128 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53491/healthz ...
	I0429 04:19:48.825114   11128 api_server.go:279] https://127.0.0.1:53491/healthz returned 200:
	ok
	I0429 04:19:48.825134   11128 status.go:422] ha-821000 apiserver status = Running (err=<nil>)
	I0429 04:19:48.825148   11128 status.go:257] ha-821000 status: &{Name:ha-821000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 04:19:48.825158   11128 status.go:255] checking status of ha-821000-m02 ...
	I0429 04:19:48.825384   11128 cli_runner.go:164] Run: docker container inspect ha-821000-m02 --format={{.State.Status}}
	I0429 04:19:48.876135   11128 status.go:330] ha-821000-m02 host status = "Stopped" (err=<nil>)
	I0429 04:19:48.876189   11128 status.go:343] host is not running, skipping remaining checks
	I0429 04:19:48.876200   11128 status.go:257] ha-821000-m02 status: &{Name:ha-821000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 04:19:48.876220   11128 status.go:255] checking status of ha-821000-m03 ...
	I0429 04:19:48.876538   11128 cli_runner.go:164] Run: docker container inspect ha-821000-m03 --format={{.State.Status}}
	I0429 04:19:48.926250   11128 status.go:330] ha-821000-m03 host status = "Running" (err=<nil>)
	I0429 04:19:48.926273   11128 host.go:66] Checking if "ha-821000-m03" exists ...
	I0429 04:19:48.926515   11128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-821000-m03
	I0429 04:19:48.976975   11128 host.go:66] Checking if "ha-821000-m03" exists ...
	I0429 04:19:48.977253   11128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 04:19:48.977300   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-821000-m03
	I0429 04:19:49.027729   11128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53576 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/ha-821000-m03/id_rsa Username:docker}
	I0429 04:19:49.113475   11128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 04:19:49.124269   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-821000
	I0429 04:19:49.174683   11128 kubeconfig.go:125] found "ha-821000" server: "https://127.0.0.1:53491"
	I0429 04:19:49.174707   11128 api_server.go:166] Checking apiserver status ...
	I0429 04:19:49.174749   11128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 04:19:49.185548   11128 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup
	W0429 04:19:49.195065   11128 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 04:19:49.195135   11128 ssh_runner.go:195] Run: ls
	I0429 04:19:49.198891   11128 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53491/healthz ...
	I0429 04:19:49.202559   11128 api_server.go:279] https://127.0.0.1:53491/healthz returned 200:
	ok
	I0429 04:19:49.202572   11128 status.go:422] ha-821000-m03 apiserver status = Running (err=<nil>)
	I0429 04:19:49.202580   11128 status.go:257] ha-821000-m03 status: &{Name:ha-821000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 04:19:49.202591   11128 status.go:255] checking status of ha-821000-m04 ...
	I0429 04:19:49.202845   11128 cli_runner.go:164] Run: docker container inspect ha-821000-m04 --format={{.State.Status}}
	I0429 04:19:49.253207   11128 status.go:330] ha-821000-m04 host status = "Running" (err=<nil>)
	I0429 04:19:49.253231   11128 host.go:66] Checking if "ha-821000-m04" exists ...
	I0429 04:19:49.253495   11128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-821000-m04
	I0429 04:19:49.303138   11128 host.go:66] Checking if "ha-821000-m04" exists ...
	I0429 04:19:49.303408   11128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 04:19:49.303458   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-821000-m04
	I0429 04:19:49.354642   11128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53740 SSHKeyPath:/Users/jenkins/minikube-integration/18756-6674/.minikube/machines/ha-821000-m04/id_rsa Username:docker}
	I0429 04:19:49.436679   11128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 04:19:49.447532   11128 status.go:257] ha-821000-m04 status: &{Name:ha-821000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.89s)
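
Note that the non-zero exit (status 7 here) is expected: minikube status exits 0 only when every node is fully up, so the test reads the exit code, not just the text, as the signal that m02 is down while the remaining nodes keep serving. Sketch:

    # Exits non-zero (7 in this run) while ha-821000-m02 is stopped.
    out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
    echo "status exit code: $?"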

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (25.58s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 node start m02 -v=7 --alsologtostderr: (23.917045635s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: (1.594078456s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.478133139s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (235.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-821000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-821000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-821000 -v=7 --alsologtostderr: (34.322376486s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-821000 --wait=true -v=7 --alsologtostderr
E0429 04:21:04.032791    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:22:35.290816    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 04:23:20.191355    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 04:23:47.875592    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-821000 --wait=true -v=7 --alsologtostderr: (3m21.007528206s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-821000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (235.48s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.8s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 node delete m03 -v=7 --alsologtostderr: (10.671763629s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: (1.001968509s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.80s)
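
The go-template query at ha_test.go:519 prints the Ready condition for each remaining node, one per line; after the delete it should print exactly three "True" lines. An equivalent single-quoted form for hand runs:

    # One line per node: the status of its Ready condition ("True" when healthy).
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'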

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (32.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 stop -v=7 --alsologtostderr: (32.54039237s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: exit status 7 (211.563227ms)

-- stdout --
	ha-821000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-821000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-821000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0429 04:24:58.013258   11825 out.go:291] Setting OutFile to fd 1 ...
	I0429 04:24:58.013946   11825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:24:58.013955   11825 out.go:304] Setting ErrFile to fd 2...
	I0429 04:24:58.013962   11825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 04:24:58.014648   11825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18756-6674/.minikube/bin
	I0429 04:24:58.014854   11825 out.go:298] Setting JSON to false
	I0429 04:24:58.014877   11825 mustload.go:65] Loading cluster: ha-821000
	I0429 04:24:58.014918   11825 notify.go:220] Checking for updates...
	I0429 04:24:58.015179   11825 config.go:182] Loaded profile config "ha-821000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 04:24:58.015192   11825 status.go:255] checking status of ha-821000 ...
	I0429 04:24:58.015565   11825 cli_runner.go:164] Run: docker container inspect ha-821000 --format={{.State.Status}}
	I0429 04:24:58.064379   11825 status.go:330] ha-821000 host status = "Stopped" (err=<nil>)
	I0429 04:24:58.064396   11825 status.go:343] host is not running, skipping remaining checks
	I0429 04:24:58.064404   11825 status.go:257] ha-821000 status: &{Name:ha-821000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 04:24:58.064424   11825 status.go:255] checking status of ha-821000-m02 ...
	I0429 04:24:58.064659   11825 cli_runner.go:164] Run: docker container inspect ha-821000-m02 --format={{.State.Status}}
	I0429 04:24:58.113553   11825 status.go:330] ha-821000-m02 host status = "Stopped" (err=<nil>)
	I0429 04:24:58.113577   11825 status.go:343] host is not running, skipping remaining checks
	I0429 04:24:58.113586   11825 status.go:257] ha-821000-m02 status: &{Name:ha-821000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 04:24:58.113600   11825 status.go:255] checking status of ha-821000-m04 ...
	I0429 04:24:58.113885   11825 cli_runner.go:164] Run: docker container inspect ha-821000-m04 --format={{.State.Status}}
	I0429 04:24:58.162263   11825 status.go:330] ha-821000-m04 host status = "Stopped" (err=<nil>)
	I0429 04:24:58.162303   11825 status.go:343] host is not running, skipping remaining checks
	I0429 04:24:58.162315   11825 status.go:257] ha-821000-m04 status: &{Name:ha-821000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.75s)

TestMultiControlPlane/serial/RestartCluster (91.74s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-821000 --wait=true -v=7 --alsologtostderr --driver=docker 
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-821000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m30.593686509s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: (1.015665572s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.74s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (39.45s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-821000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-821000 --control-plane -v=7 --alsologtostderr: (38.100573131s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-821000 status -v=7 --alsologtostderr: (1.348212958s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.10026798s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestImageBuild/serial/Setup (21.34s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-282000 --driver=docker 
E0429 04:27:35.295014    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-282000 --driver=docker : (21.340295422s)
--- PASS: TestImageBuild/serial/Setup (21.34s)

TestImageBuild/serial/NormalBuild (4.19s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-282000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-282000: (4.19468163s)
--- PASS: TestImageBuild/serial/NormalBuild (4.19s)

TestImageBuild/serial/BuildWithBuildArg (1.55s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-282000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-282000: (1.552093307s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.55s)
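
--build-opt forwards options to the in-cluster docker build, so build-arg=ENV_A=test_env_str only has an effect if the test-arg Dockerfile declares ARG ENV_A. Its content is not shown in this log; the Dockerfile below is a hypothetical stand-in for illustration:

    # Hypothetical Dockerfile consuming the forwarded build-arg.
    printf 'FROM busybox\nARG ENV_A\nRUN echo "built with ENV_A=${ENV_A}"\n' > Dockerfile

    # no-cache forces the RUN step so the arg value shows up in the build output.
    out/minikube-darwin-amd64 image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p image-282000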

TestImageBuild/serial/BuildWithDockerIgnore (1.34s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-282000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-282000: (1.344180411s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.34s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.28s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-282000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-282000: (1.283319098s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.28s)

TestJSONOutput/start/Command (36.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-546000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0429 04:28:20.195579    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-546000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (36.220361851s)
--- PASS: TestJSONOutput/start/Command (36.22s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-546000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-546000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-546000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-546000 --output=json --user=testUser: (5.75376206s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-016000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-016000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (385.040395ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9459cf6b-b0a8-41d7-8479-d8fed830c856","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-016000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc1e9173-9b77-45db-bd34-104a89f2a219","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18756"}}
	{"specversion":"1.0","id":"98be210f-6c8c-4098-b37d-4f3d7d6685a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig"}}
	{"specversion":"1.0","id":"c0a21e6e-3d87-43f0-a979-75246231e5ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a7097a1f-b421-417e-abbd-82e44cd37b92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"036fcae2-b3c5-4530-a887-f04156db6601","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18756-6674/.minikube"}}
	{"specversion":"1.0","id":"4c99f1d5-118a-433c-a6d9-d520718996f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"621c5c62-af2e-4eee-be88-cad7c66ab5c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-016000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-016000
--- PASS: TestErrorJSONOutput (0.76s)
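The error events above follow the CloudEvents-style JSON schema minikube emits for --output=json. As a minimal illustrative sketch (not part of the test suite), the lines can be decoded in Go; the struct fields mirror the keys visible in the stdout excerpt:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent mirrors the fields shown in the stdout above
	// (specversion, id, source, type, data). All data values in the
	// excerpt are strings, so a map[string]string suffices.
	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. piped from: minikube start --output=json
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines
			}
			// Error events like DRV_UNSUPPORTED_OS above carry
			// name/exitcode/message inside data.
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}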

                                                
                                    
TestKicCustomNetwork/create_custom_network (22.17s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-461000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-461000 --network=: (19.777315964s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-461000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-461000
E0429 04:28:58.346165    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-461000: (2.342518136s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.17s)
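The test confirms the network was created by listing docker networks (kic_custom_network_test.go:150 above). A minimal sketch of the same verification in Go, assuming a local docker CLI and the profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs: docker network ls --format {{.Name}}
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			panic(err)
		}
		for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if name == "docker-network-461000" { // profile name from this run
				fmt.Println("custom network exists:", name)
			}
		}
	}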

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.64s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-297000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-297000 --network=bridge: (19.326540436s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-297000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-297000: (2.259736267s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.64s)

TestKicExistingNetwork (22.24s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-340000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-340000 --network=existing-network: (19.576601852s)
helpers_test.go:175: Cleaning up "existing-network-340000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-340000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-340000: (2.228298429s)
--- PASS: TestKicExistingNetwork (22.24s)

TestKicCustomSubnet (23.61s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-431000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-431000 --subnet=192.168.60.0/24: (21.199700987s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-431000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-431000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-431000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-431000: (2.363590215s)
--- PASS: TestKicCustomSubnet (23.61s)
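The inspect template above, {{(index .IPAM.Config 0).Subnet}}, indexes the network's first IPAM config entry to read back the subnet. A minimal illustrative sketch of the same readback, assuming the profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs against the network minikube created.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-431000",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("subnet:", strings.TrimSpace(string(out))) // expect 192.168.60.0/24
	}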

                                                
                                    
TestKicStaticIP (22.31s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-664000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-664000 --static-ip=192.168.200.200: (19.742753726s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-664000 ip
helpers_test.go:175: Cleaning up "static-ip-664000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-664000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-664000: (2.332948806s)
--- PASS: TestKicStaticIP (22.31s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (47.15s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-673000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-673000 --driver=docker : (20.363154847s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-674000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-674000 --driver=docker : (20.1736718s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-673000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-674000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-674000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-674000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-674000: (2.385378303s)
helpers_test.go:175: Cleaning up "first-673000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-673000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-673000: (2.397026427s)
--- PASS: TestMinikubeProfile (47.15s)
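`profile list -ojson` (run twice above) prints a JSON document describing all profiles. Its payload is not shown in this log, so the sketch below stays schema-agnostic and only inspects the top-level keys; pipe the command's output into it, e.g. `out/minikube-darwin-amd64 profile list -ojson | go run inspect.go` (file name hypothetical):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Decode only the top level; the exact schema is not assumed here.
		var top map[string]json.RawMessage
		if err := json.NewDecoder(os.Stdin).Decode(&top); err != nil {
			panic(err)
		}
		for key, raw := range top {
			fmt.Printf("%s: %d bytes\n", key, len(raw))
		}
	}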

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-750000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-750000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.17979905s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.18s)

TestPreload (132.79s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-897000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0429 05:17:35.544162    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
E0429 05:18:20.445439    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/functional-653000/client.crt: no such file or directory
E0429 05:18:58.598308    7115 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18756-6674/.minikube/profiles/addons-816000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-897000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m36.068904317s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-897000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-897000 image pull gcr.io/k8s-minikube/busybox: (1.480061684s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-897000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-897000: (10.766488327s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-897000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-897000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (21.700293907s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-897000 image list
helpers_test.go:175: Cleaning up "test-preload-897000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-897000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-897000: (2.451048033s)
--- PASS: TestPreload (132.79s)
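The sequence above is the preload scenario end to end: start with --preload=false on an older Kubernetes version, pull an extra image, stop, restart on the default version, then check via `image list` that the image survived. A condensed illustrative sketch of that workflow (profile and image names taken from this run; this is not the test's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run shells out to the minikube binary used throughout this report.
	func run(args ...string) string {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		return string(out)
	}

	func main() {
		p := "test-preload-897000"
		run("start", "-p", p, "--memory=2200", "--preload=false", "--driver=docker", "--kubernetes-version=v1.24.4")
		run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
		run("stop", "-p", p)
		run("start", "-p", p, "--memory=2200", "--driver=docker") // restart on the default Kubernetes version
		fmt.Println("busybox retained:", strings.Contains(run("-p", p, "image", "list"), "busybox"))
	}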

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.69s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18756
- KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1831641306/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1831641306/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1831641306/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1831641306/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.69s)
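The "! Unable to update hyperkit driver" warning appears because docker-machine-driver-hyperkit must be owned by root:wheel with the setuid bit set, and this non-interactive run cannot supply a sudo password. A minimal illustrative sketch that checks those permissions up front; the driver path is a hypothetical install location, not one taken from this log:

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	func main() {
		path := "/usr/local/bin/docker-machine-driver-hyperkit" // hypothetical location
		fi, err := os.Stat(path)
		if err != nil {
			fmt.Println("driver not found:", err)
			return
		}
		st := fi.Sys().(*syscall.Stat_t)
		// Want uid 0 (root), gid 0 (wheel on macOS), and the setuid bit,
		// matching the sudo chown/chmod commands printed above.
		fmt.Printf("uid=%d gid=%d setuid=%v\n", st.Uid, st.Gid, fi.Mode()&os.ModeSetuid != 0)
	}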

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.82s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18756
- KUBECONFIG=/Users/jenkins/minikube-integration/18756-6674/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2836391609/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2836391609/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2836391609/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2836391609/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.82s)

Test skip (17/201)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestAddons/parallel/Registry (19.25s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 12.404209ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kbxlf" [32d35edc-895a-4403-bd98-b672b0843854] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00583391s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7kc99" [857f5417-d236-43a7-b237-a369f6802e1d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003923943s
addons_test.go:340: (dbg) Run:  kubectl --context addons-816000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-816000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-816000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.171426578s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (19.25s)
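Before bailing out on connectivity assumptions, the test probes the registry through cluster-local DNS with `wget --spider` from a busybox pod (addons_test.go:345 above). The same probe as a minimal Go sketch; the hostname only resolves when the program runs inside the cluster:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Equivalent of: wget --spider -S http://registry.kube-system.svc.cluster.local
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry status:", resp.Status)
	}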

                                                
                                    
TestAddons/parallel/Ingress (11.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-816000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-816000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-816000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2738e212-1490-4fce-9526-86e5b9ee030b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2738e212-1490-4fce-9526-86e5b9ee030b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004970564s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-816000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.80s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (17.16s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-653000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-653000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-rbfjf" [09a33b2d-da02-4321-8cc8-fc91eefb62ff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-rbfjf" [09a33b2d-da02-4321-8cc8-fc91eefb62ff] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.004664392s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (17.16s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)