Test Report: Docker_macOS 18429

ce47e36c27c610c668eed9e63157fcf5091ee2ba:2024-03-18:33630

Failed tests (22/211)

TestOffline (754.82s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-210000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-210000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m33.907129227s)

-- stdout --
	* [offline-docker-210000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-210000" primary control-plane node in "offline-docker-210000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-210000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0318 07:34:07.089064   21846 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:34:07.089249   21846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:34:07.089255   21846 out.go:304] Setting ErrFile to fd 2...
	I0318 07:34:07.089259   21846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:34:07.089453   21846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:34:07.090972   21846 out.go:298] Setting JSON to false
	I0318 07:34:07.114420   21846 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":14620,"bootTime":1710757827,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 07:34:07.114503   21846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 07:34:07.136320   21846 out.go:177] * [offline-docker-210000] minikube v1.32.0 on Darwin 14.3.1
	I0318 07:34:07.178142   21846 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 07:34:07.178167   21846 notify.go:220] Checking for updates...
	I0318 07:34:07.219889   21846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 07:34:07.262036   21846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 07:34:07.304021   21846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 07:34:07.325113   21846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 07:34:07.345821   21846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 07:34:07.367158   21846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 07:34:07.423978   21846 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 07:34:07.424143   21846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:34:07.577764   21846 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:2 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:109 OomKillDisable:false NGoroutines:200 SystemTime:2024-03-18 14:34:07.554637833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:34:07.619562   21846 out.go:177] * Using the docker driver based on user configuration
	I0318 07:34:07.640565   21846 start.go:297] selected driver: docker
	I0318 07:34:07.640580   21846 start.go:901] validating driver "docker" against <nil>
	I0318 07:34:07.640604   21846 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 07:34:07.643815   21846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:34:07.748814   21846 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:2 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:109 OomKillDisable:false NGoroutines:200 SystemTime:2024-03-18 14:34:07.738138437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:34:07.748977   21846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 07:34:07.749175   21846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 07:34:07.770578   21846 out.go:177] * Using Docker Desktop driver with root privileges
	I0318 07:34:07.792176   21846 cni.go:84] Creating CNI manager for ""
	I0318 07:34:07.792225   21846 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 07:34:07.792239   21846 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 07:34:07.792389   21846 start.go:340] cluster config:
	{Name:offline-docker-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 07:34:07.857058   21846 out.go:177] * Starting "offline-docker-210000" primary control-plane node in "offline-docker-210000" cluster
	I0318 07:34:07.899892   21846 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 07:34:07.942939   21846 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0318 07:34:08.005729   21846 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:34:08.005758   21846 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 07:34:08.005781   21846 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 07:34:08.005793   21846 cache.go:56] Caching tarball of preloaded images
	I0318 07:34:08.005913   21846 preload.go:173] Found /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 07:34:08.005924   21846 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 07:34:08.006800   21846 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/offline-docker-210000/config.json ...
	I0318 07:34:08.006873   21846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/offline-docker-210000/config.json: {Name:mk87f6df3868ab47b2c3302cf0bc4053193ec6a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 07:34:08.057814   21846 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 07:34:08.057838   21846 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0318 07:34:08.057855   21846 cache.go:194] Successfully downloaded all kic artifacts
	I0318 07:34:08.057889   21846 start.go:360] acquireMachinesLock for offline-docker-210000: {Name:mk579cee37e2a2f4db3f462cdf0a7cecd9ab2263 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:34:08.058035   21846 start.go:364] duration metric: took 134.902µs to acquireMachinesLock for "offline-docker-210000"
	I0318 07:34:08.058060   21846 start.go:93] Provisioning new machine with config: &{Name:offline-docker-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 07:34:08.058129   21846 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:34:08.079730   21846 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:34:08.079930   21846 start.go:159] libmachine.API.Create for "offline-docker-210000" (driver="docker")
	I0318 07:34:08.079956   21846 client.go:168] LocalClient.Create starting
	I0318 07:34:08.080066   21846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:34:08.080111   21846 main.go:141] libmachine: Decoding PEM data...
	I0318 07:34:08.080129   21846 main.go:141] libmachine: Parsing certificate...
	I0318 07:34:08.080232   21846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:34:08.080282   21846 main.go:141] libmachine: Decoding PEM data...
	I0318 07:34:08.080296   21846 main.go:141] libmachine: Parsing certificate...
	I0318 07:34:08.101385   21846 cli_runner.go:164] Run: docker network inspect offline-docker-210000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:34:08.193918   21846 cli_runner.go:211] docker network inspect offline-docker-210000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:34:08.194010   21846 network_create.go:281] running [docker network inspect offline-docker-210000] to gather additional debugging logs...
	I0318 07:34:08.194026   21846 cli_runner.go:164] Run: docker network inspect offline-docker-210000
	W0318 07:34:08.245608   21846 cli_runner.go:211] docker network inspect offline-docker-210000 returned with exit code 1
	I0318 07:34:08.245648   21846 network_create.go:284] error running [docker network inspect offline-docker-210000]: docker network inspect offline-docker-210000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-210000 not found
	I0318 07:34:08.245663   21846 network_create.go:286] output of [docker network inspect offline-docker-210000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-210000 not found
	
	** /stderr **
	I0318 07:34:08.245790   21846 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:34:08.346440   21846 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:34:08.347920   21846 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:34:08.349339   21846 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:34:08.349725   21846 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000987e40}
	I0318 07:34:08.349739   21846 network_create.go:124] attempt to create docker network offline-docker-210000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0318 07:34:08.349816   21846 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-210000 offline-docker-210000
	I0318 07:34:08.439387   21846 network_create.go:108] docker network offline-docker-210000 192.168.76.0/24 created
	I0318 07:34:08.439451   21846 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-210000" container
	I0318 07:34:08.439555   21846 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:34:08.492263   21846 cli_runner.go:164] Run: docker volume create offline-docker-210000 --label name.minikube.sigs.k8s.io=offline-docker-210000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:34:08.544982   21846 oci.go:103] Successfully created a docker volume offline-docker-210000
	I0318 07:34:08.545097   21846 cli_runner.go:164] Run: docker run --rm --name offline-docker-210000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-210000 --entrypoint /usr/bin/test -v offline-docker-210000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:34:09.028104   21846 oci.go:107] Successfully prepared a docker volume offline-docker-210000
	I0318 07:34:09.028148   21846 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:34:09.028161   21846 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:34:09.028257   21846 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-210000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0318 07:40:08.121641   21846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:40:08.121776   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:08.174225   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:40:08.174355   21846 retry.go:31] will retry after 258.002796ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:08.433414   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:08.484964   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:40:08.485075   21846 retry.go:31] will retry after 481.248704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:08.968788   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:09.022107   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:40:09.022205   21846 retry.go:31] will retry after 525.411842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:09.548555   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:09.601262   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	W0318 07:40:09.601367   21846 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	
	W0318 07:40:09.601412   21846 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:09.601467   21846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:40:09.601525   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:09.650657   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:40:09.650757   21846 retry.go:31] will retry after 160.77592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:09.813881   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:09.866159   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:40:09.866266   21846 retry.go:31] will retry after 490.897492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:10.358368   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:10.411633   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:40:10.411731   21846 retry.go:31] will retry after 464.449304ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:10.877292   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:40:10.929590   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	W0318 07:40:10.929690   21846 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	
	W0318 07:40:10.929709   21846 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:10.929723   21846 start.go:128] duration metric: took 6m2.830287047s to createHost
	I0318 07:40:10.929730   21846 start.go:83] releasing machines lock for "offline-docker-210000", held for 6m2.830390312s
	W0318 07:40:10.929745   21846 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0318 07:40:10.930195   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:10.978996   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:10.979059   21846 delete.go:82] Unable to get host status for offline-docker-210000, assuming it has already been deleted: state: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	W0318 07:40:10.979154   21846 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0318 07:40:10.979163   21846 start.go:728] Will try again in 5 seconds ...
	I0318 07:40:15.979563   21846 start.go:360] acquireMachinesLock for offline-docker-210000: {Name:mk579cee37e2a2f4db3f462cdf0a7cecd9ab2263 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:40:15.980479   21846 start.go:364] duration metric: took 854.266µs to acquireMachinesLock for "offline-docker-210000"
	I0318 07:40:15.980672   21846 start.go:96] Skipping create...Using existing machine configuration
	I0318 07:40:15.980700   21846 fix.go:54] fixHost starting: 
	I0318 07:40:15.981226   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:16.033058   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:16.033103   21846 fix.go:112] recreateIfNeeded on offline-docker-210000: state= err=unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:16.033118   21846 fix.go:117] machineExists: false. err=machine does not exist
	I0318 07:40:16.054961   21846 out.go:177] * docker "offline-docker-210000" container is missing, will recreate.
	I0318 07:40:16.097501   21846 delete.go:124] DEMOLISHING offline-docker-210000 ...
	I0318 07:40:16.097678   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:16.147937   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	W0318 07:40:16.147998   21846 stop.go:83] unable to get state: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:16.148020   21846 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:16.148401   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:16.197450   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:16.197498   21846 delete.go:82] Unable to get host status for offline-docker-210000, assuming it has already been deleted: state: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:16.197584   21846 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-210000
	W0318 07:40:16.246862   21846 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-210000 returned with exit code 1
	I0318 07:40:16.246900   21846 kic.go:371] could not find the container offline-docker-210000 to remove it. will try anyways
	I0318 07:40:16.246970   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:16.295509   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	W0318 07:40:16.295554   21846 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:16.295630   21846 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-210000 /bin/bash -c "sudo init 0"
	W0318 07:40:16.344388   21846 cli_runner.go:211] docker exec --privileged -t offline-docker-210000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 07:40:16.344436   21846 oci.go:650] error shutdown offline-docker-210000: docker exec --privileged -t offline-docker-210000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:17.345373   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:17.397370   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:17.397422   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:17.397435   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:17.397462   21846 retry.go:31] will retry after 321.848794ms: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:17.719510   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:17.771801   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:17.771859   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:17.771872   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:17.771901   21846 retry.go:31] will retry after 1.098585059s: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:18.872888   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:18.926510   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:18.926556   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:18.926565   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:18.926593   21846 retry.go:31] will retry after 1.608536449s: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:20.535425   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:20.587030   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:20.587085   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:20.587095   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:20.587117   21846 retry.go:31] will retry after 931.663462ms: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:21.521081   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:21.573337   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:21.573395   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:21.573415   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:21.573439   21846 retry.go:31] will retry after 2.2580525s: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:23.832106   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:23.884536   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:23.884584   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:23.884596   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:23.884623   21846 retry.go:31] will retry after 3.908530008s: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:27.793326   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:27.890682   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:27.890730   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:27.890739   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:27.890762   21846 retry.go:31] will retry after 5.980636162s: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:33.872235   21846 cli_runner.go:164] Run: docker container inspect offline-docker-210000 --format={{.State.Status}}
	W0318 07:40:33.923519   21846 cli_runner.go:211] docker container inspect offline-docker-210000 --format={{.State.Status}} returned with exit code 1
	I0318 07:40:33.923571   21846 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:40:33.923581   21846 oci.go:664] temporary error: container offline-docker-210000 status is  but expect it to be exited
	I0318 07:40:33.923608   21846 oci.go:88] couldn't shut down offline-docker-210000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	 
	I0318 07:40:33.923689   21846 cli_runner.go:164] Run: docker rm -f -v offline-docker-210000
	I0318 07:40:33.973358   21846 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-210000
	W0318 07:40:34.022495   21846 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-210000 returned with exit code 1
	I0318 07:40:34.022605   21846 cli_runner.go:164] Run: docker network inspect offline-docker-210000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:40:34.072240   21846 cli_runner.go:164] Run: docker network rm offline-docker-210000
	I0318 07:40:34.173984   21846 fix.go:124] Sleeping 1 second for extra luck!
	I0318 07:40:35.174157   21846 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:40:35.196413   21846 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:40:35.196628   21846 start.go:159] libmachine.API.Create for "offline-docker-210000" (driver="docker")
	I0318 07:40:35.196658   21846 client.go:168] LocalClient.Create starting
	I0318 07:40:35.196881   21846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:40:35.196997   21846 main.go:141] libmachine: Decoding PEM data...
	I0318 07:40:35.197026   21846 main.go:141] libmachine: Parsing certificate...
	I0318 07:40:35.197111   21846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:40:35.197189   21846 main.go:141] libmachine: Decoding PEM data...
	I0318 07:40:35.197205   21846 main.go:141] libmachine: Parsing certificate...
	I0318 07:40:35.218522   21846 cli_runner.go:164] Run: docker network inspect offline-docker-210000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:40:35.271769   21846 cli_runner.go:211] docker network inspect offline-docker-210000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:40:35.271869   21846 network_create.go:281] running [docker network inspect offline-docker-210000] to gather additional debugging logs...
	I0318 07:40:35.271891   21846 cli_runner.go:164] Run: docker network inspect offline-docker-210000
	W0318 07:40:35.320875   21846 cli_runner.go:211] docker network inspect offline-docker-210000 returned with exit code 1
	I0318 07:40:35.320907   21846 network_create.go:284] error running [docker network inspect offline-docker-210000]: docker network inspect offline-docker-210000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-210000 not found
	I0318 07:40:35.320920   21846 network_create.go:286] output of [docker network inspect offline-docker-210000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-210000 not found
	
	** /stderr **
	I0318 07:40:35.321055   21846 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:40:35.372867   21846 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:40:35.374561   21846 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:40:35.376256   21846 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:40:35.377848   21846 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:40:35.379610   21846 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:40:35.381491   21846 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:40:35.382292   21846 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000986350}
	I0318 07:40:35.382320   21846 network_create.go:124] attempt to create docker network offline-docker-210000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0318 07:40:35.382447   21846 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-210000 offline-docker-210000
	I0318 07:40:35.468654   21846 network_create.go:108] docker network offline-docker-210000 192.168.103.0/24 created
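
The network step above reduces to a single Docker CLI call. As a minimal Go sketch of the same invocation (plain os/exec with the flags copied from the log line above; minikube's real cli_runner wraps this with logging and timing):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags copied verbatim from the logged command: a bridge network with a
		// fixed subnet/gateway, a 65535 MTU, and minikube's discovery labels.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.103.0/24",
			"--gateway=192.168.103.1",
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=65535",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=offline-docker-210000",
			"offline-docker-210000")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("network create failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("created network %s", out) // docker prints the new network ID
	}
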
	I0318 07:40:35.468696   21846 kic.go:121] calculated static IP "192.168.103.2" for the "offline-docker-210000" container
	I0318 07:40:35.468793   21846 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:40:35.519863   21846 cli_runner.go:164] Run: docker volume create offline-docker-210000 --label name.minikube.sigs.k8s.io=offline-docker-210000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:40:35.568794   21846 oci.go:103] Successfully created a docker volume offline-docker-210000
	I0318 07:40:35.568930   21846 cli_runner.go:164] Run: docker run --rm --name offline-docker-210000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-210000 --entrypoint /usr/bin/test -v offline-docker-210000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:40:35.870671   21846 oci.go:107] Successfully prepared a docker volume offline-docker-210000
	I0318 07:40:35.870708   21846 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:40:35.870721   21846 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:40:35.870828   21846 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-210000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
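
The six-minute silence that follows is this extraction step: a one-shot container whose entrypoint is tar, with the preload tarball bind-mounted read-only and the freshly created volume mounted as the destination. A condensed sketch of the same docker run (tarball path and image tag are taken from the log; the real command also pins the image by the sha256 digest shown above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarball := "/Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "offline-docker-210000:/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
		}
	}
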
	I0318 07:46:35.195427   21846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:46:35.195552   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:35.248911   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:35.249026   21846 retry.go:31] will retry after 180.603599ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
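
Every retry that follows re-runs the same probe: ask Docker which host port is mapped to 22/tcp inside the container, which can only succeed once the container exists. A standalone sketch of that probe (the Go template is copied from the log; the surrounding helper is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort mirrors the logged inspect call. While the container is
	// missing it fails with "No such container", exactly as above.
	func sshHostPort(container string) (string, error) {
		format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
	}

	func main() {
		fmt.Println(sshHostPort("offline-docker-210000"))
	}
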
	I0318 07:46:35.430003   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:35.481914   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:35.482032   21846 retry.go:31] will retry after 504.243249ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:35.987012   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:36.040106   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:36.040211   21846 retry.go:31] will retry after 423.779531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:36.465275   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:36.518000   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	W0318 07:46:36.518112   21846 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	
	W0318 07:46:36.518138   21846 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:36.518195   21846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:46:36.518248   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:36.568873   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:36.568974   21846 retry.go:31] will retry after 145.009775ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:36.714522   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:36.765379   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:36.765500   21846 retry.go:31] will retry after 335.921337ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:37.103437   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:37.156577   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:37.156675   21846 retry.go:31] will retry after 496.305264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:37.653429   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:37.706706   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:37.706806   21846 retry.go:31] will retry after 483.512988ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:38.191969   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:38.244965   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	W0318 07:46:38.245083   21846 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	
	W0318 07:46:38.245102   21846 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:38.245113   21846 start.go:128] duration metric: took 6m3.074346425s to createHost
	I0318 07:46:38.245174   21846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:46:38.245235   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:38.294394   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:38.294491   21846 retry.go:31] will retry after 214.513634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:38.510339   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:38.562027   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:38.562124   21846 retry.go:31] will retry after 222.62588ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:38.785238   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:38.837585   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:38.837686   21846 retry.go:31] will retry after 510.803866ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:39.350869   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:39.402362   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	W0318 07:46:39.402461   21846 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	
	W0318 07:46:39.402489   21846 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:39.402545   21846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:46:39.402602   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:39.451833   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:39.451927   21846 retry.go:31] will retry after 343.209404ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:39.797508   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:39.851298   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:39.851393   21846 retry.go:31] will retry after 453.560185ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:40.305441   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:40.356667   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	I0318 07:46:40.356769   21846 retry.go:31] will retry after 387.144209ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	I0318 07:46:40.745335   21846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000
	W0318 07:46:40.796352   21846 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000 returned with exit code 1
	W0318 07:46:40.796461   21846 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
	
	W0318 07:46:40.796485   21846 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-210000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-210000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000
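
All the retry.go:31 lines above follow one pattern: re-run the failing call after a short randomized delay, growing the delay between attempts. A generic sketch of that shape (not minikube's actual retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter re-invokes fn with randomized, roughly growing pauses,
	// matching the "will retry after 504.243249ms" cadence in the log.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		_ = retryWithJitter(3, 200*time.Millisecond, func() error {
			return errors.New("No such container: offline-docker-210000")
		})
	}
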
	I0318 07:46:40.796501   21846 fix.go:56] duration metric: took 6m24.819423609s for fixHost
	I0318 07:46:40.796508   21846 start.go:83] releasing machines lock for "offline-docker-210000", held for 6m24.819494088s
	W0318 07:46:40.796593   21846 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-210000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-210000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0318 07:46:40.840190   21846 out.go:177] 
	W0318 07:46:40.861102   21846 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0318 07:46:40.861198   21846 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0318 07:46:40.861247   21846 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0318 07:46:40.904041   21846 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-210000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-03-18 07:46:40.981199 -0700 PDT m=+6230.603664019
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-210000
helpers_test.go:235: (dbg) docker inspect offline-docker-210000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-210000",
	        "Id": "dccefc092db577666277f090b1edfac61aac6b42ea542059f223b472f345d804",
	        "Created": "2024-03-18T14:40:35.429277702Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-210000"
	        }
	    }
	]

-- /stdout --
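
The inspect output shows the network created at 07:40:35 survived even though its container never came up, which is why the profile cleanup below matters. To consume that JSON programmatically instead of eyeballing it, a small decoder covering just the fields printed above (struct shape inferred from this output, not Docker's full API types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type network struct {
		Name string
		IPAM struct {
			Config []struct{ Subnet, Gateway string }
		}
		Options map[string]string
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "offline-docker-210000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []network
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nets {
			if len(n.IPAM.Config) > 0 {
				fmt.Println(n.Name, n.IPAM.Config[0].Subnet, n.Options["com.docker.network.driver.mtu"])
			}
		}
	}
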
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-210000 -n offline-docker-210000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-210000 -n offline-docker-210000: exit status 7 (114.888996ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:46:41.147643   22350 status.go:249] status error: host: state: unknown state "offline-docker-210000": docker container inspect offline-docker-210000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-210000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-210000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-210000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-210000
--- FAIL: TestOffline (754.82s)

TestCertOptions (7200.71s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-202000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (3m40s)
	TestCertOptions (2m36s)
	TestNetworkPlugins (28m43s)
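
The panic is the testing package's own watchdog (visible as testing.(*M).startAlarm in the first goroutine below), armed because the suite ran with a two-hour -timeout; when it fires it dumps every goroutine stack, which is everything that follows. The mechanism is just a timer whose callback panics; a minimal reproduction of the idea:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Stand-in for go test's -timeout: arm a timer at startup and panic
		// from its callback if the work has not finished by then.
		timeout := 2 * time.Second // the real run used 2h0m0s
		time.AfterFunc(timeout, func() {
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
		time.Sleep(time.Hour) // simulate tests that never finish
	}
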

goroutine 2421 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00072a000, 0xc0012bbbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007f4390, {0xde43240, 0x2a, 0x2a}, {0x9afcbc5?, 0xb58a9e9?, 0xde655c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0009f2640)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0009f2640)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006c2c80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2109 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0023084e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0023084e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0023084e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0023084e0, 0xc00296a500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390
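
This stack repeats for most of the parked goroutines below: a parallel subtest blocked inside testing.(*T).Parallel (waitParallel) because the tests still running hold the available parallelism slots. The MaybeParallel helper in these frames presumably reduces to something like this sketch (the real helpers_test.go version may gate it behind flags):

	package integration

	import "testing"

	// MaybeParallel, a minimal sketch. t.Parallel signals the parent, then
	// blocks in waitParallel until the runner grants this subtest a slot,
	// which is where the "chan receive, 29 minutes" goroutines are parked.
	func MaybeParallel(t *testing.T) {
		t.Parallel()
	}
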

goroutine 807 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002625850, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc64e240?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021cf860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002625880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0024793c0, {0xcb3e4c0, 0xc0021196b0}, 0x1, 0xc000184180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0024793c0, 0x3b9aca00, 0x0, 0x1, 0xc000184180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 819
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef
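
Goroutines like this one are client-go's certificate-rotation workers: a workqueue consumer re-invoked by the wait helpers until the stop channel closes. The loop's shape, reconstructed from the frames above using the real k8s.io/apimachinery wait package but with a toy worker in place of processNextWorkItem:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		stop := make(chan struct{})
		go wait.Until(func() {
			// The real worker blocks on a workqueue Get here, which is why
			// the goroutine shows up as sync.Cond.Wait in the dump.
			fmt.Println("processing one item")
		}, time.Second, stop)
		time.Sleep(3 * time.Second)
		close(stop)
	}
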

goroutine 2420 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0020d2420, 0xc0009baa80)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 515
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 53 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 52
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 515 [syscall, 2 minutes]:
syscall.syscall6(0xc000a37f80?, 0x1000000000010?, 0x10000000019?, 0x556f55b8?, 0x90?, 0xe7425b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0022718a0?, 0x9a3d165?, 0x90?, 0xcaa2120?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x9b6df05?, 0xc0022718d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0009c0480)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0020d2420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0020d2420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002289860, 0xc0020d2420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc002289860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc002289860, 0xcb322f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
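
TestCertOptions itself (this goroutine) is simply blocked in wait4 for its minikube start child process; the integration.Run helper it calls is, at heart, a logged exec.Cmd.Run. A condensed sketch (the real helper in helpers_test.go also wires stdout/stderr into buffers, which is what goroutines 2418 and 2419 are busy with):

	package integration

	import (
		"os/exec"
		"testing"
	)

	// Run, a stand-in for the helpers_test.go helper of the same name.
	func Run(t *testing.T, cmd *exec.Cmd) error {
		t.Logf("(dbg) Run:  %s", cmd.String())
		// Run parks this goroutine in syscall.wait4 until the child exits.
		if err := cmd.Run(); err != nil {
			t.Logf("(dbg) Non-zero exit: %s: %v", cmd.String(), err)
			return err
		}
		return nil
	}
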

goroutine 2419 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5576e248, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00286c660?, 0xc0021d7200?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00286c660, {0xc0021d7200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00229c188, {0xc0021d7200?, 0x554c44a8?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a365d0, {0xcb3ced8, 0xc0007924b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xcb3d018, 0xc000a365d0}, {0xcb3ced8, 0xc0007924b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xdd78860?, {0xcb3d018, 0xc000a365d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0xcb3d018?, 0xc000a365d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xcb3d018, 0xc000a365d0}, {0xcb3cf98, 0xc00229c188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xcb32368?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 515
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 516 [syscall, 3 minutes]:
syscall.syscall6(0xc000a37f80?, 0x1000000000010?, 0x10000000019?, 0x557fcf18?, 0x90?, 0xe7425b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0012bba40?, 0x9a3d165?, 0x90?, 0xcaa2120?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x9b6df05?, 0xc0012bba74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0009c06c0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0020d29a0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0020d29a0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002289a00, 0xc0020d29a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc002289a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc002289a00, 0xcb322f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1119 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc002136840, 0xc002b05c80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1118
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 154 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008fea40, 0xc000184180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 153 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020a23c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 159 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 158
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 157 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0008fea10, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc64e240?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020a22a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008fea40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000148d50, {0xcb3e4c0, 0xc0012fdf50}, 0x1, 0xc000184180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000148d50, 0x3b9aca00, 0x0, 0x1, 0xc000184180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 2021 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0022884e0, {0xb5326a5?, 0xc5b367b07a2?}, 0xc0025d20d8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0022884e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0022884e0, 0xcb323d8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 158 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xcb60e20, 0xc000184180}, 0xc00018f750, 0xc00099ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xcb60e20, 0xc000184180}, 0x0?, 0xc00018f750, 0xc00018f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xcb60e20?, 0xc000184180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00018f7d0?, 0xa037d85?, 0xc0020a23c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2078 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002288340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002288340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002288340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc002288340, 0xcb32400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2096 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002288000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002288000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc002288000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc002288000, 0xcb32420)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2079 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022889c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022889c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0022889c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0022889c0, 0xcb32428)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2023 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002288820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002288820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc002288820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc002288820, 0xcb323f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2409 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x5576e530, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00286cea0?, 0xc0021d6200?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00286cea0, {0xc0021d6200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000792790, {0xc0021d6200?, 0xc000580008?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a368d0, {0xcb3ced8, 0xc00229c0a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xcb3d018, 0xc000a368d0}, {0xcb3ced8, 0xc00229c0a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0023a0f98?, {0xcb3d018, 0xc000a368d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0023a0f38?, {0xcb3d018?, 0xc000a368d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xcb3d018, 0xc000a368d0}, {0xcb3cf98, 0xc000792790}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0009ba3c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 516
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 808 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xcb60e20, 0xc000184180}, 0xc000092f50, 0xc0020bcf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xcb60e20, 0xc000184180}, 0x58?, 0xc000092f50, 0xc000092f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xcb60e20?, 0xc000184180?}, 0xc00072bd40?, 0x9b70bc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000092fd0?, 0x9bb6ec4?, 0xc002625480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 819
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 586 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0x5576eb00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00236c400?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00236c400)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00236c400)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0025869e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0025869e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00086a0f0, {0xcb54780, 0xc0025869e0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc00086a0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0026ba340?, 0xc0026baea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 583
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
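
Goroutine 586 is a leftover HTTP proxy from the functional tests, still parked in Accept after 112 minutes. Starting such a throwaway server follows the usual net/http shape; a sketch with a bare reverse proxy standing in for whatever handler functional_test.go actually installs:

	package main

	import (
		"net"
		"net/http"
		"net/http/httputil"
		"net/url"
	)

	func main() {
		target, err := url.Parse("http://127.0.0.1:8080") // hypothetical upstream
		if err != nil {
			panic(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		srv := &http.Server{Handler: httputil.NewSingleHostReverseProxy(target)}
		// The test launches this on its own goroutine; Serve then blocks in
		// Accept on the listener, the exact frame goroutine 586 is parked in.
		if err := srv.Serve(ln); err != nil {
			panic(err)
		}
	}
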

goroutine 1730 [syscall, 96 minutes]:
syscall.syscall(0x0?, 0xc000011da0?, 0xc001fea6f0?, 0x9add05d?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc000011c98?, 0xc000705880?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1708
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
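
Goroutine 1730 has spent 96 minutes trying to take a file lock via syscall.Flock; juju/mutex is how minikube serializes access to machine state, so a wedged holder stalls every other profile operation. The blocking call it is sitting in reduces to:

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	func main() {
		// Hypothetical lock path; juju/mutex derives its own file name.
		f, err := os.OpenFile("/tmp/minikube-demo.lock", os.O_CREATE|os.O_RDONLY, 0o600)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// LOCK_EX blocks until the lock is free: the syscall.Flock frame at
		// the top of goroutine 1730's stack.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
			panic(err)
		}
		fmt.Println("lock acquired")
	}
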

goroutine 2080 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002288b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002288b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc002288b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc002288b60, 0xcb323a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1204 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023ce000, 0xc002555f20)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 684
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2108 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002308340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002308340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002308340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002308340, 0xc00296a480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 818 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021cf980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 729
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2113 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002288d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002288d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc002288d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc002288d00, 0xcb323b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2101 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000a121a0, 0xc0025d20d8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2021
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 809 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 808
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2106 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a13d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a13d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a13d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a13d40, 0xc00296a380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2103 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a12d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a12d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a12d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a12d00, 0xc00296a200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2104 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a13a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a13a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a13a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a13a00, 0xc00296a280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2110 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002308680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002308680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002308680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002308680, 0xc00296a580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 819 [chan receive, 110 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002625880, 0xc000184180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 729
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2105 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a13ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a13ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a13ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a13ba0, 0xc00296a300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2102 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a12820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a12820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a12820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a12820, 0xc00296a080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 909 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0xc00248fa20, 0xc002285200)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 908
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1206 [select, 108 minutes]:
net/http.(*persistConn).readLoop(0xc002505320)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1194
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2418 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5576ea08, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00286c5a0?, 0xc0023eea91?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00286c5a0, {0xc0023eea91, 0x56f, 0x56f})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00229c160, {0xc0023eea91?, 0xc00250aa80?, 0x227?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a365a0, {0xcb3ced8, 0xc000792490})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xcb3d018, 0xc000a365a0}, {0xcb3ced8, 0xc000792490}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0023a1678?, {0xcb3d018, 0xc000a365a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0023a1738?, {0xcb3d018?, 0xc000a365a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xcb3d018, 0xc000a365a0}, {0xcb3cf98, 0xc00229c160}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002810240?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 515
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab
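
Goroutine 2418 is the capture goroutine that os/exec spawns whenever Stdout or Stderr is not an *os.File: it io.Copy's the child's pipe into a bytes.Buffer and sits in "IO wait" between reads for as long as the child keeps the pipe open. This is the machinery behind every "(dbg) Run:" line in this report; the caller-side equivalent is just:

    package sketch

    import (
        "bytes"
        "os/exec"
    )

    // captureOutput sketches what the test harness's command wrapper does in
    // essence: assigning a *bytes.Buffer to Stdout/Stderr makes exec start the
    // copy goroutine seen above (Cmd.writerDescriptor.func1 -> io.Copy).
    func captureOutput(name string, args ...string) (string, error) {
        var buf bytes.Buffer
        cmd := exec.Command(name, args...)
        cmd.Stdout = &buf
        cmd.Stderr = &buf
        err := cmd.Run()
        return buf.String(), err
    }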

goroutine 1207 [select, 108 minutes]:
net/http.(*persistConn).writeLoop(0xc002505320)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1194
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2022 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002288680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002288680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002288680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002288680, 0xcb323e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2107 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006e0a00)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0023081a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0023081a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0023081a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0023081a0, 0xc00296a400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2410 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0020d29a0, 0xc002810480)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 516
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1095 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc000893760, 0xc002b04c60)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1094
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1715 [syscall, 96 minutes]:
syscall.syscall(0x0?, 0xc0025d34d0?, 0xc0023a26f0?, 0x9add05d?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc0025d3398?, 0x1?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1708
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
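
Goroutine 1715 has spent 96 minutes inside syscall.Flock. juju/mutex/v2 implements its cross-process lock as an advisory flock on a file, and a blocking LOCK_EX acquire simply sleeps in the kernel until the current holder (here, most plausibly another wedged minikube process holding the machines lock) releases it. Stripped down, assuming a hypothetical lock-file path:

    package sketch

    import (
        "os"
        "syscall"
    )

    // acquireFileLock sketches the flock-based mutex that juju/mutex/v2 builds
    // on (darwin/linux): a blocking exclusive flock parks the goroutine in the
    // syscall, exactly the "syscall, 96 minutes" state in the trace above.
    func acquireFileLock(path string) (*os.File, error) { // path is hypothetical
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDONLY, 0o600)
        if err != nil {
            return nil, err
        }
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            f.Close()
            return nil, err
        }
        return f, nil // caller must Flock(fd, LOCK_UN) and Close when done
    }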

goroutine 2408 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x5576e150, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00286cde0?, 0xc0012afa9a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00286cde0, {0xc0012afa9a, 0x566, 0x566})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000792748, {0xc0012afa9a?, 0xc000804a80?, 0x230?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a368a0, {0xcb3ced8, 0xc00229c0a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xcb3d018, 0xc000a368a0}, {0xcb3ced8, 0xc00229c0a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00018e678?, {0xcb3d018, 0xc000a368a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00018e738?, {0xcb3d018?, 0xc000a368a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xcb3d018, 0xc000a368a0}, {0xcb3cf98, 0xc000792748}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0028103c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 516
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

TestDockerFlags (757.55s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-263000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0318 07:48:03.097877   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:48:47.311149   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:52:46.191349   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:53:03.134925   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:53:47.351159   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:58:03.132078   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:58:30.399615   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:58:47.346539   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
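
These E-lines come from the cert-rotation watcher shown earlier as goroutine 819: it keeps trying to reload client certificates for the addons-636000 and functional-014000 profiles, whose files were deleted when those clusters were torn down earlier in the run, so every reload fails with the same open(2) error. The reload step amounts to re-reading the key pair from disk; a minimal sketch:

    package sketch

    import "crypto/tls"

    // reloadClientCert mimics what the watcher attempts on each tick. Once a
    // profile directory is removed, both paths are gone and the reload
    // surfaces "open ...: no such file or directory", as logged above.
    func reloadClientCert(certPath, keyPath string) (tls.Certificate, error) {
        return tls.LoadX509KeyPair(certPath, keyPath)
    }
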
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-263000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m36.198140776s)

-- stdout --
	* [docker-flags-263000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-263000" primary control-plane node in "docker-flags-263000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-263000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0318 07:47:36.583029   22491 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:47:36.583297   22491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:47:36.583303   22491 out.go:304] Setting ErrFile to fd 2...
	I0318 07:47:36.583308   22491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:47:36.583470   22491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:47:36.584977   22491 out.go:298] Setting JSON to false
	I0318 07:47:36.607659   22491 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":15429,"bootTime":1710757827,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 07:47:36.607749   22491 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 07:47:36.629607   22491 out.go:177] * [docker-flags-263000] minikube v1.32.0 on Darwin 14.3.1
	I0318 07:47:36.672176   22491 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 07:47:36.672267   22491 notify.go:220] Checking for updates...
	I0318 07:47:36.715064   22491 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 07:47:36.737172   22491 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 07:47:36.758076   22491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 07:47:36.779313   22491 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 07:47:36.800986   22491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 07:47:36.823110   22491 config.go:182] Loaded profile config "force-systemd-flag-529000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:47:36.823298   22491 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 07:47:36.879092   22491 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 07:47:36.879257   22491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:47:36.981600   22491 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:15 ContainersRunning:2 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:124 OomKillDisable:false NGoroutines:250 SystemTime:2024-03-18 14:47:36.971184649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:47:37.024508   22491 out.go:177] * Using the docker driver based on user configuration
	I0318 07:47:37.046384   22491 start.go:297] selected driver: docker
	I0318 07:47:37.046410   22491 start.go:901] validating driver "docker" against <nil>
	I0318 07:47:37.046427   22491 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 07:47:37.050778   22491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:47:37.152130   22491 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:15 ContainersRunning:2 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:124 OomKillDisable:false NGoroutines:250 SystemTime:2024-03-18 14:47:37.14167693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:47:37.152305   22491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 07:47:37.152497   22491 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0318 07:47:37.173442   22491 out.go:177] * Using Docker Desktop driver with root privileges
	I0318 07:47:37.196570   22491 cni.go:84] Creating CNI manager for ""
	I0318 07:47:37.196616   22491 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 07:47:37.196630   22491 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 07:47:37.196732   22491 start.go:340] cluster config:
	{Name:docker-flags-263000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 07:47:37.218135   22491 out.go:177] * Starting "docker-flags-263000" primary control-plane node in "docker-flags-263000" cluster
	I0318 07:47:37.260545   22491 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 07:47:37.282060   22491 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0318 07:47:37.324234   22491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:47:37.324267   22491 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 07:47:37.324289   22491 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 07:47:37.324306   22491 cache.go:56] Caching tarball of preloaded images
	I0318 07:47:37.324445   22491 preload.go:173] Found /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 07:47:37.324458   22491 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 07:47:37.325118   22491 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/docker-flags-263000/config.json ...
	I0318 07:47:37.325322   22491 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/docker-flags-263000/config.json: {Name:mk0d2a56435c7decaa734c7de60d4ccaded1780a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 07:47:37.375084   22491 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 07:47:37.375105   22491 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0318 07:47:37.375138   22491 cache.go:194] Successfully downloaded all kic artifacts
	I0318 07:47:37.375203   22491 start.go:360] acquireMachinesLock for docker-flags-263000: {Name:mk0547e4988af819c75894de0906eca61949a736 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:47:37.375350   22491 start.go:364] duration metric: took 132.945µs to acquireMachinesLock for "docker-flags-263000"
	I0318 07:47:37.375395   22491 start.go:93] Provisioning new machine with config: &{Name:docker-flags-263000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 07:47:37.375482   22491 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:47:37.418423   22491 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:47:37.418796   22491 start.go:159] libmachine.API.Create for "docker-flags-263000" (driver="docker")
	I0318 07:47:37.418850   22491 client.go:168] LocalClient.Create starting
	I0318 07:47:37.419069   22491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:47:37.419165   22491 main.go:141] libmachine: Decoding PEM data...
	I0318 07:47:37.419198   22491 main.go:141] libmachine: Parsing certificate...
	I0318 07:47:37.419310   22491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:47:37.419379   22491 main.go:141] libmachine: Decoding PEM data...
	I0318 07:47:37.419395   22491 main.go:141] libmachine: Parsing certificate...
	I0318 07:47:37.420360   22491 cli_runner.go:164] Run: docker network inspect docker-flags-263000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:47:37.471584   22491 cli_runner.go:211] docker network inspect docker-flags-263000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:47:37.471690   22491 network_create.go:281] running [docker network inspect docker-flags-263000] to gather additional debugging logs...
	I0318 07:47:37.471713   22491 cli_runner.go:164] Run: docker network inspect docker-flags-263000
	W0318 07:47:37.520910   22491 cli_runner.go:211] docker network inspect docker-flags-263000 returned with exit code 1
	I0318 07:47:37.520941   22491 network_create.go:284] error running [docker network inspect docker-flags-263000]: docker network inspect docker-flags-263000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-263000 not found
	I0318 07:47:37.520952   22491 network_create.go:286] output of [docker network inspect docker-flags-263000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-263000 not found
	
	** /stderr **
	I0318 07:47:37.521077   22491 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:47:37.572187   22491 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:47:37.573823   22491 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:47:37.575186   22491 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:47:37.576560   22491 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:47:37.576929   22491 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00224eca0}
	I0318 07:47:37.576945   22491 network_create.go:124] attempt to create docker network docker-flags-263000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0318 07:47:37.577017   22491 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-263000 docker-flags-263000
	W0318 07:47:37.627621   22491 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-263000 docker-flags-263000 returned with exit code 1
	W0318 07:47:37.627656   22491 network_create.go:149] failed to create docker network docker-flags-263000 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-263000 docker-flags-263000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0318 07:47:37.627674   22491 network_create.go:116] failed to create docker network docker-flags-263000 192.168.85.0/24, will retry: subnet is taken
	I0318 07:47:37.629286   22491 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:47:37.629648   22491 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022bd130}
	I0318 07:47:37.629659   22491 network_create.go:124] attempt to create docker network docker-flags-263000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0318 07:47:37.629729   22491 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-263000 docker-flags-263000
	I0318 07:47:37.716279   22491 network_create.go:108] docker network docker-flags-263000 192.168.94.0/24 created
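
The lines above are minikube's free-subnet probe: it walks candidate private /24 blocks (192.168.49.0, .58, .67, .76, .85), skips any it already knows are reserved, and when "docker network create" still fails with "Pool overlaps with other one on this address space" it marks that subnet taken and advances, succeeding here on 192.168.94.0/24. Condensed into a sketch with hypothetical candidates and helper name:

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createClusterNetwork sketches the skip-and-retry loop above. The
    // candidate list is illustrative; minikube derives candidates dynamically.
    func createClusterNetwork(name string) (string, error) {
        candidates := []string{"192.168.85.0/24", "192.168.94.0/24"} // hypothetical
        for _, subnet := range candidates {
            gateway := strings.TrimSuffix(subnet, "0/24") + "1"
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name,
            ).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet held by another network; try the next block
            }
            return "", fmt.Errorf("network create failed: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free subnet for %s", name)
    }
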
	I0318 07:47:37.716317   22491 kic.go:121] calculated static IP "192.168.94.2" for the "docker-flags-263000" container
	I0318 07:47:37.716424   22491 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:47:37.768552   22491 cli_runner.go:164] Run: docker volume create docker-flags-263000 --label name.minikube.sigs.k8s.io=docker-flags-263000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:47:37.840278   22491 oci.go:103] Successfully created a docker volume docker-flags-263000
	I0318 07:47:37.840415   22491 cli_runner.go:164] Run: docker run --rm --name docker-flags-263000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-263000 --entrypoint /usr/bin/test -v docker-flags-263000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:47:38.198570   22491 oci.go:107] Successfully prepared a docker volume docker-flags-263000
	I0318 07:47:38.198610   22491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:47:38.198622   22491 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:47:38.198715   22491 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-263000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0318 07:53:37.459135   22491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:53:37.459281   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:37.511497   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 07:53:37.511632   22491 retry.go:31] will retry after 174.948704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:37.688566   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:37.740826   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 07:53:37.740935   22491 retry.go:31] will retry after 192.373993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:37.934525   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:37.985791   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 07:53:37.985907   22491 retry.go:31] will retry after 345.09076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:38.331578   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:38.385355   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 07:53:38.385460   22491 retry.go:31] will retry after 943.283994ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:39.331141   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:39.384076   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	W0318 07:53:39.384172   22491 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	
	W0318 07:53:39.384197   22491 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
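
The retry ladder above (174.9ms, 192.4ms, 345.1ms, 943.3ms, ...) is retry.go's jittered, growing backoff: each failed docker container inspect lengthens the wait before the next attempt until the operation succeeds or its budget runs out. The shape of that loop, with illustrative bounds:

    package sketch

    import (
        "math/rand"
        "time"
    )

    // retryWithBackoff sketches the behaviour implied by the retry.go lines:
    // re-run the operation with a jittered, roughly doubling delay until it
    // succeeds or the deadline passes. Initial delay and budget are illustrative.
    func retryWithBackoff(op func() error) error {
        delay := 150 * time.Millisecond
        deadline := time.Now().Add(30 * time.Second)
        for {
            err := op()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay *= 2
        }
    }
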
	I0318 07:53:39.384253   22491 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:53:39.384312   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:39.434346   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 07:53:39.434434   22491 retry.go:31] will retry after 267.620721ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:39.704499   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:39.757941   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 07:53:39.758027   22491 retry.go:31] will retry after 362.693515ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:40.122631   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:40.175290   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 07:53:40.175398   22491 retry.go:31] will retry after 725.320506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:40.902502   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 07:53:40.955323   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	W0318 07:53:40.955425   22491 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	
	W0318 07:53:40.955442   22491 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:40.955457   22491 start.go:128] duration metric: took 6m3.542137349s to createHost
	I0318 07:53:40.955464   22491 start.go:83] releasing machines lock for "docker-flags-263000", held for 6m3.542280129s
	W0318 07:53:40.955478   22491 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
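
createHost gave up after exactly the configured StartHostTimeout (6m0s from the cluster config, logged as 360.000000 seconds): the preload-extraction docker run issued at 07:47:38 never returned, no docker-flags-263000 container was ever created, and the deadline fired. The guard is a plain context deadline around host creation; sketched with hypothetical names:

    package sketch

    import (
        "context"
        "fmt"
        "time"
    )

    // createHostWithTimeout sketches the guard behind "create host timed out
    // in 360.000000 seconds": run creation under a 6-minute deadline and turn
    // a context expiry into that error. createHost is a hypothetical callback.
    func createHostWithTimeout(createHost func(context.Context) error) error {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := createHost(ctx); err != nil {
            if ctx.Err() == context.DeadlineExceeded {
                return fmt.Errorf("creating host: create host timed out in %.6f seconds",
                    (6 * time.Minute).Seconds())
            }
            return err
        }
        return nil
    }
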
	I0318 07:53:40.955919   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:41.005933   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:41.005997   22491 delete.go:82] Unable to get host status for docker-flags-263000, assuming it has already been deleted: state: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	W0318 07:53:41.006085   22491 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0318 07:53:41.006095   22491 start.go:728] Will try again in 5 seconds ...
	I0318 07:53:46.008225   22491 start.go:360] acquireMachinesLock for docker-flags-263000: {Name:mk0547e4988af819c75894de0906eca61949a736 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:53:46.008520   22491 start.go:364] duration metric: took 247.209µs to acquireMachinesLock for "docker-flags-263000"
	I0318 07:53:46.008560   22491 start.go:96] Skipping create...Using existing machine configuration
	I0318 07:53:46.008575   22491 fix.go:54] fixHost starting: 
	I0318 07:53:46.009101   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:46.061207   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:46.061256   22491 fix.go:112] recreateIfNeeded on docker-flags-263000: state= err=unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:46.061273   22491 fix.go:117] machineExists: false. err=machine does not exist
	I0318 07:53:46.083080   22491 out.go:177] * docker "docker-flags-263000" container is missing, will recreate.
	I0318 07:53:46.126576   22491 delete.go:124] DEMOLISHING docker-flags-263000 ...
	I0318 07:53:46.126759   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:46.177351   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	W0318 07:53:46.177413   22491 stop.go:83] unable to get state: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:46.177432   22491 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:46.177794   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:46.226860   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:46.226921   22491 delete.go:82] Unable to get host status for docker-flags-263000, assuming it has already been deleted: state: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:46.227008   22491 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-263000
	W0318 07:53:46.276506   22491 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-263000 returned with exit code 1
	I0318 07:53:46.276549   22491 kic.go:371] could not find the container docker-flags-263000 to remove it. will try anyways
	I0318 07:53:46.276625   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:46.328501   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	W0318 07:53:46.328544   22491 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:46.328632   22491 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-263000 /bin/bash -c "sudo init 0"
	W0318 07:53:46.377575   22491 cli_runner.go:211] docker exec --privileged -t docker-flags-263000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 07:53:46.377609   22491 oci.go:650] error shutdown docker-flags-263000: docker exec --privileged -t docker-flags-263000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
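
The DEMOLISHING phase first attempts a graceful shutdown (docker exec ... "sudo init 0") and then polls .State.Status until the container reports exited. Since the container never existed, the exec fails immediately and, as the following lines show, every poll returns "No such container" while the delays grow (0.70s, 1.12s, 1.18s, 1.96s, 3.56s, 5.43s). The poll half, sketched with fixed delays:

    package sketch

    import (
        "os/exec"
        "strings"
        "time"
    )

    // waitForExited sketches the verify-shutdown poll above: keep asking
    // docker for .State.Status until it reports "exited" or the attempts run
    // out. Fixed one-second sleeps stand in for the growing delays in the log.
    func waitForExited(container string) bool {
        for i := 0; i < 6; i++ {
            out, err := exec.Command("docker", "container", "inspect",
                "--format", "{{.State.Status}}", container).Output()
            if err == nil && strings.TrimSpace(string(out)) == "exited" {
                return true
            }
            time.Sleep(time.Second)
        }
        return false
    }
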
	I0318 07:53:47.379683   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:47.431778   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:47.431825   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:47.431838   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:53:47.431865   22491 retry.go:31] will retry after 695.692856ms: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:48.128008   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:48.180942   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:48.180988   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:48.181002   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:53:48.181027   22491 retry.go:31] will retry after 1.115558847s: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:49.297885   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:49.352444   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:49.352506   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:49.352514   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:53:49.352538   22491 retry.go:31] will retry after 1.184208044s: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:50.537831   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:50.591340   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:50.591386   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:50.591401   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:53:50.591427   22491 retry.go:31] will retry after 1.957955744s: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:52.549589   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:52.601441   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:52.601486   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:52.601496   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:53:52.601519   22491 retry.go:31] will retry after 3.560520177s: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:56.163872   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:53:56.216100   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:56.216147   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:53:56.216157   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:53:56.216182   22491 retry.go:31] will retry after 5.43203425s: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:54:01.649080   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:54:01.701901   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:54:01.701948   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:54:01.701960   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:54:01.701983   22491 retry.go:31] will retry after 3.384401536s: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:54:05.087132   22491 cli_runner.go:164] Run: docker container inspect docker-flags-263000 --format={{.State.Status}}
	W0318 07:54:05.139755   22491 cli_runner.go:211] docker container inspect docker-flags-263000 --format={{.State.Status}} returned with exit code 1
	I0318 07:54:05.139801   22491 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 07:54:05.139812   22491 oci.go:664] temporary error: container docker-flags-263000 status is  but expect it to be exited
	I0318 07:54:05.139846   22491 oci.go:88] couldn't shut down docker-flags-263000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	 
	I0318 07:54:05.139931   22491 cli_runner.go:164] Run: docker rm -f -v docker-flags-263000
	I0318 07:54:05.191265   22491 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-263000
	W0318 07:54:05.240771   22491 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-263000 returned with exit code 1
	I0318 07:54:05.240876   22491 cli_runner.go:164] Run: docker network inspect docker-flags-263000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:54:05.290829   22491 cli_runner.go:164] Run: docker network rm docker-flags-263000
	I0318 07:54:05.403747   22491 fix.go:124] Sleeping 1 second for extra luck!
	I0318 07:54:06.404994   22491 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:54:06.428396   22491 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:54:06.428570   22491 start.go:159] libmachine.API.Create for "docker-flags-263000" (driver="docker")
	I0318 07:54:06.428599   22491 client.go:168] LocalClient.Create starting
	I0318 07:54:06.428823   22491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:54:06.428909   22491 main.go:141] libmachine: Decoding PEM data...
	I0318 07:54:06.428932   22491 main.go:141] libmachine: Parsing certificate...
	I0318 07:54:06.429008   22491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:54:06.429075   22491 main.go:141] libmachine: Decoding PEM data...
	I0318 07:54:06.429100   22491 main.go:141] libmachine: Parsing certificate...
	I0318 07:54:06.429667   22491 cli_runner.go:164] Run: docker network inspect docker-flags-263000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:54:06.482820   22491 cli_runner.go:211] docker network inspect docker-flags-263000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:54:06.482919   22491 network_create.go:281] running [docker network inspect docker-flags-263000] to gather additional debugging logs...
	I0318 07:54:06.482937   22491 cli_runner.go:164] Run: docker network inspect docker-flags-263000
	W0318 07:54:06.532291   22491 cli_runner.go:211] docker network inspect docker-flags-263000 returned with exit code 1
	I0318 07:54:06.532322   22491 network_create.go:284] error running [docker network inspect docker-flags-263000]: docker network inspect docker-flags-263000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-263000 not found
	I0318 07:54:06.532335   22491 network_create.go:286] output of [docker network inspect docker-flags-263000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-263000 not found
	
	** /stderr **
	I0318 07:54:06.532457   22491 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:54:06.584190   22491 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:54:06.585928   22491 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:54:06.587465   22491 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:54:06.589032   22491 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:54:06.590634   22491 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:54:06.592228   22491 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:54:06.593848   22491 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:54:06.594184   22491 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f4d70}
	I0318 07:54:06.594195   22491 network_create.go:124] attempt to create docker network docker-flags-263000 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 65535 ...
	I0318 07:54:06.594257   22491 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-263000 docker-flags-263000
	I0318 07:54:06.694091   22491 network_create.go:108] docker network docker-flags-263000 192.168.112.0/24 created
	I0318 07:54:06.694127   22491 kic.go:121] calculated static IP "192.168.112.2" for the "docker-flags-263000" container
	I0318 07:54:06.694230   22491 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:54:06.746133   22491 cli_runner.go:164] Run: docker volume create docker-flags-263000 --label name.minikube.sigs.k8s.io=docker-flags-263000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:54:06.795648   22491 oci.go:103] Successfully created a docker volume docker-flags-263000
	I0318 07:54:06.795780   22491 cli_runner.go:164] Run: docker run --rm --name docker-flags-263000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-263000 --entrypoint /usr/bin/test -v docker-flags-263000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:54:07.107434   22491 oci.go:107] Successfully prepared a docker volume docker-flags-263000
	I0318 07:54:07.107467   22491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:54:07.107481   22491 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:54:07.107577   22491 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-263000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0318 08:00:06.426969   22491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 08:00:06.427069   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:06.480165   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:06.480281   22491 retry.go:31] will retry after 136.535926ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:06.617547   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:06.690032   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:06.690137   22491 retry.go:31] will retry after 362.281663ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:07.054804   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:07.105957   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:07.106067   22491 retry.go:31] will retry after 369.385886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:07.477824   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:07.528851   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:07.528959   22491 retry.go:31] will retry after 618.899312ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:08.149482   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:08.201828   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	W0318 08:00:08.201931   22491 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	
	W0318 08:00:08.201958   22491 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:08.202041   22491 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 08:00:08.202100   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:08.250920   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:08.251029   22491 retry.go:31] will retry after 306.043227ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:08.558218   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:08.610888   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:08.610986   22491 retry.go:31] will retry after 467.960561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:09.081316   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:09.134734   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:09.134828   22491 retry.go:31] will retry after 486.021078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:09.621550   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:09.675953   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	W0318 08:00:09.676063   22491 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	
	W0318 08:00:09.676082   22491 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:09.676097   22491 start.go:128] duration metric: took 6m3.274420521s to createHost
	I0318 08:00:09.676164   22491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 08:00:09.676219   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:09.726243   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:09.726334   22491 retry.go:31] will retry after 371.781473ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:10.098899   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:10.152867   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:10.152968   22491 retry.go:31] will retry after 374.698882ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:10.528415   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:10.582816   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:10.582908   22491 retry.go:31] will retry after 492.861534ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:11.076406   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:11.128495   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	W0318 08:00:11.128603   22491 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	
	W0318 08:00:11.128617   22491 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:11.128677   22491 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 08:00:11.128735   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:11.178592   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:11.178685   22491 retry.go:31] will retry after 140.480502ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:11.321575   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:11.374381   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:11.374474   22491 retry.go:31] will retry after 307.649739ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:11.683970   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:11.737419   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	I0318 08:00:11.737510   22491 retry.go:31] will retry after 813.577248ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:12.553451   22491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000
	W0318 08:00:12.605572   22491 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000 returned with exit code 1
	W0318 08:00:12.605673   22491 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	
	W0318 08:00:12.605691   22491 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-263000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-263000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	I0318 08:00:12.605720   22491 fix.go:56] duration metric: took 6m26.600764861s for fixHost
	I0318 08:00:12.605728   22491 start.go:83] releasing machines lock for "docker-flags-263000", held for 6m26.600829922s
	W0318 08:00:12.605804   22491 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-263000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-263000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0318 08:00:12.648436   22491 out.go:177] 
	W0318 08:00:12.670071   22491 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0318 08:00:12.670148   22491 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0318 08:00:12.670190   22491 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0318 08:00:12.691388   22491 out.go:177] 

** /stderr **
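The stderr above is dominated by a single pattern: minikube polls `docker container inspect docker-flags-263000 --format={{.State.Status}}`, and because the daemon answers "No such container", the state comes back empty and retry.go schedules another attempt with a growing delay until the 360-second create-host timeout fires. A minimal standalone sketch of that poll-and-backoff loop in Go follows; helper names and the doubling backoff policy are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out to the docker CLI exactly like the log lines do.
// When the container does not exist, docker exits non-zero and status is "".
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// waitExited polls until the container reports "exited", doubling the delay
// between attempts (illustrative policy; minikube's real backoff is jittered).
func waitExited(name string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			return nil
		}
		// In the run above this always sees state "", because inspect
		// itself fails with "No such container".
		fmt.Printf("state %q, will retry after %v\n", status, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %q is exited", name)
}

func main() {
	if err := waitExited("docker-flags-263000", 20*time.Second); err != nil {
		fmt.Println(err)
	}
}

This matches the log exactly: the status is never "exited" (it is empty), so every attempt falls through to the retry branch until the overall timeout is reached.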
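The 07:54:06 block shows the network setup that does succeed: minikube scans candidate private /24 subnets (192.168.49.0/24, then +9 on the third octet: .58, .67, ..., .112), skips any subnet an existing docker network has reserved, and creates a labeled bridge network on the first free one. A rough sketch of that scan against the docker CLI is below; reservedSubnets is a hypothetical helper, and the real command also passes `-o --ip-masq -o --icc`, which is why those literal flags later show up as network option keys.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// reservedSubnets collects the subnet of every existing docker network so
// colliding candidates can be skipped. Exact-string matching is a
// simplification; the real code does proper CIDR overlap checks.
func reservedSubnets() map[string]bool {
	used := map[string]bool{}
	ids, _ := exec.Command("docker", "network", "ls", "-q").Output()
	for _, id := range strings.Fields(string(ids)) {
		sub, _ := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if s := strings.TrimSpace(string(sub)); s != "" {
			used[s] = true
		}
	}
	return used
}

func main() {
	used := reservedSubnets()
	// Candidate order mirrors the log: 192.168.49.0/24, then +9 on the
	// third octet per step (.58, .67, ... .112).
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if used[subnet] {
			fmt.Println("skipping reserved subnet", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=65535",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"docker-flags-263000").Run()
		if err == nil {
			fmt.Println("created network on", subnet)
			return
		}
	}
}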
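The ssh-port lookups that fail from 08:00:06 onward use a nested Go-template index to pull the published host port for 22/tcp out of the inspect JSON. The same template can be run standalone; against a missing container it fails exactly as in the log (sketch only, reusing this run's profile name).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the log: index NetworkSettings.Ports["22/tcp"][0]
	// and read its HostPort. It errors if the container or binding is gone.
	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "docker-flags-263000").Output()
	if err != nil {
		fmt.Println("no such container or no 22/tcp binding:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}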
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-263000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-263000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-263000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (238.211144ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-263000 host status: state: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-263000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-263000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-263000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (203.876064ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-263000 host status: state: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-263000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-263000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
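Both failed assertions above (docker_test.go:63 and docker_test.go:73) are plain substring checks: the test expects FOO=BAR and BAZ=BAT inside the Environment= line, and --debug inside the ExecStart= line, of `systemctl show docker` output fetched over `minikube ssh`. Since the host never existed, the output was empty ("\n\n") and every check failed. A compact illustration of that style of check follows; assertContains is a hypothetical stand-in, not the test's code.

package main

import (
	"fmt"
	"strings"
)

// assertContains mirrors the substring style of the failing assertions:
// every expected fragment must appear in the systemctl output.
func assertContains(output string, wants ...string) error {
	for _, w := range wants {
		if !strings.Contains(output, w) {
			return fmt.Errorf("expected %q in output %q", w, output)
		}
	}
	return nil
}

func main() {
	// On a healthy node, `systemctl show docker --property=Environment`
	// would print a line of this shape:
	healthy := "Environment=FOO=BAR BAZ=BAT"
	fmt.Println(assertContains(healthy, "FOO=BAR", "BAZ=BAT")) // <nil>

	// In this run the ssh command produced only "\n\n", so both checks fail:
	fmt.Println(assertContains("\n\n", "FOO=BAR", "BAZ=BAT"))
}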
panic.go:626: *** TestDockerFlags FAILED at 2024-03-18 08:00:13.209155 -0700 PDT m=+7042.798012859
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-263000
helpers_test.go:235: (dbg) docker inspect docker-flags-263000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-263000",
	        "Id": "1206c076b7cde015e3a2c0547c0212358e5f8636da61f587362641324348ff9a",
	        "Created": "2024-03-18T14:54:06.654868125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.112.0/24",
	                    "Gateway": "192.168.112.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-263000"
	        }
	    }
	]

-- /stdout --
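The inspect output above captures the asymmetry behind the failure: the docker-flags-263000 bridge network exists, carrying the minikube labels and the literal "--icc"/"--ip-masq" option keys from the create command, but no container ever attached to it ("Containers": {}). Leftover networks like this can be found and removed by label; a small sketch follows, and this is roughly what the `minikube delete -p docker-flags-263000` cleanup below also takes care of.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List network IDs carrying the minikube label shown in the inspect output.
	out, err := exec.Command("docker", "network", "ls", "-q",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true").Output()
	if err != nil {
		fmt.Println("listing networks:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		// `docker network rm` refuses if a container is still attached;
		// that is safe here, since the network above has no containers.
		if err := exec.Command("docker", "network", "rm", id).Run(); err != nil {
			fmt.Println("could not remove", id, ":", err)
		} else {
			fmt.Println("removed leftover network", id)
		}
	}
}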
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-263000 -n docker-flags-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-263000 -n docker-flags-263000: exit status 7 (113.43332ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 08:00:13.373983   22920 status.go:249] status error: host: state: unknown state "docker-flags-263000": docker container inspect docker-flags-263000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-263000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-263000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-263000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-263000
--- FAIL: TestDockerFlags (757.55s)

TestForceSystemdFlag (748.69s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-529000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-529000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m27.571851559s)

-- stdout --
	* [force-systemd-flag-529000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-529000" primary control-plane node in "force-systemd-flag-529000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-529000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0318 07:46:41.945155   22374 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:46:41.945440   22374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:46:41.945446   22374 out.go:304] Setting ErrFile to fd 2...
	I0318 07:46:41.945449   22374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:46:41.945632   22374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:46:41.947160   22374 out.go:298] Setting JSON to false
	I0318 07:46:41.969592   22374 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":15374,"bootTime":1710757827,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 07:46:41.969692   22374 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 07:46:41.991036   22374 out.go:177] * [force-systemd-flag-529000] minikube v1.32.0 on Darwin 14.3.1
	I0318 07:46:42.055516   22374 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 07:46:42.033869   22374 notify.go:220] Checking for updates...
	I0318 07:46:42.097798   22374 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 07:46:42.119556   22374 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 07:46:42.140829   22374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 07:46:42.161869   22374 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 07:46:42.183628   22374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 07:46:42.205809   22374 config.go:182] Loaded profile config "force-systemd-env-793000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:46:42.205992   22374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 07:46:42.261912   22374 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 07:46:42.262078   22374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:46:42.361474   22374 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:2 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:121 OomKillDisable:false NGoroutines:240 SystemTime:2024-03-18 14:46:42.351599655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:46:42.404698   22374 out.go:177] * Using the docker driver based on user configuration
	I0318 07:46:42.425501   22374 start.go:297] selected driver: docker
	I0318 07:46:42.425525   22374 start.go:901] validating driver "docker" against <nil>
	I0318 07:46:42.425552   22374 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 07:46:42.429820   22374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:46:42.527906   22374 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:2 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:121 OomKillDisable:false NGoroutines:240 SystemTime:2024-03-18 14:46:42.518616708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:46:42.528094   22374 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 07:46:42.528291   22374 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 07:46:42.549654   22374 out.go:177] * Using Docker Desktop driver with root privileges
	I0318 07:46:42.571557   22374 cni.go:84] Creating CNI manager for ""
	I0318 07:46:42.571602   22374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 07:46:42.571623   22374 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 07:46:42.571756   22374 start.go:340] cluster config:
	{Name:force-systemd-flag-529000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 07:46:42.593322   22374 out.go:177] * Starting "force-systemd-flag-529000" primary control-plane node in "force-systemd-flag-529000" cluster
	I0318 07:46:42.635343   22374 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 07:46:42.656249   22374 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0318 07:46:42.698386   22374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:46:42.698434   22374 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 07:46:42.698488   22374 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 07:46:42.698548   22374 cache.go:56] Caching tarball of preloaded images
	I0318 07:46:42.698822   22374 preload.go:173] Found /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 07:46:42.698843   22374 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 07:46:42.699748   22374 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/force-systemd-flag-529000/config.json ...
	I0318 07:46:42.699863   22374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/force-systemd-flag-529000/config.json: {Name:mk8e07ddac42a3ff3fa138d8109624d54ae537c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 07:46:42.749383   22374 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 07:46:42.749421   22374 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0318 07:46:42.749462   22374 cache.go:194] Successfully downloaded all kic artifacts
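The cache lines above show the fast path: the kicbase image is pinned by digest, and minikube checks the local daemon before pulling. The same check can be made by hand; a sketch:

    $ # List local kicbase images with digests; compare against the sha256 pinned in the log
    $ docker images --digests gcr.io/k8s-minikube/kicbase-builds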
	I0318 07:46:42.749504   22374 start.go:360] acquireMachinesLock for force-systemd-flag-529000: {Name:mk0d4a6556f40cbe2aa31a07012baf742c4c7562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:46:42.749660   22374 start.go:364] duration metric: took 144.658µs to acquireMachinesLock for "force-systemd-flag-529000"
	I0318 07:46:42.749684   22374 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-529000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 07:46:42.749731   22374 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:46:42.792430   22374 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:46:42.792807   22374 start.go:159] libmachine.API.Create for "force-systemd-flag-529000" (driver="docker")
	I0318 07:46:42.792871   22374 client.go:168] LocalClient.Create starting
	I0318 07:46:42.793064   22374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:46:42.793159   22374 main.go:141] libmachine: Decoding PEM data...
	I0318 07:46:42.793197   22374 main.go:141] libmachine: Parsing certificate...
	I0318 07:46:42.793305   22374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:46:42.793384   22374 main.go:141] libmachine: Decoding PEM data...
	I0318 07:46:42.793401   22374 main.go:141] libmachine: Parsing certificate...
	I0318 07:46:42.794313   22374 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:46:42.885785   22374 cli_runner.go:211] docker network inspect force-systemd-flag-529000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:46:42.885913   22374 network_create.go:281] running [docker network inspect force-systemd-flag-529000] to gather additional debugging logs...
	I0318 07:46:42.885928   22374 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529000
	W0318 07:46:42.934797   22374 cli_runner.go:211] docker network inspect force-systemd-flag-529000 returned with exit code 1
	I0318 07:46:42.934828   22374 network_create.go:284] error running [docker network inspect force-systemd-flag-529000]: docker network inspect force-systemd-flag-529000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-529000 not found
	I0318 07:46:42.934847   22374 network_create.go:286] output of [docker network inspect force-systemd-flag-529000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-529000 not found
	
	** /stderr **
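The exit-code-1 inspect above is expected on a fresh profile: minikube probes for a Docker network named after the profile and treats "not found" as the cue to create one. The probe in isolation, with the template trimmed to the fields that matter here:

    $ # Exit status 1 plus "network force-systemd-flag-529000 not found" on stderr means no network exists yet
    $ docker network inspect force-systemd-flag-529000 --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'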
	I0318 07:46:42.934988   22374 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:46:42.987500   22374 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:46:42.989068   22374 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:46:42.990666   22374 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:46:42.991055   22374 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002309b20}
	I0318 07:46:42.991071   22374 network_create.go:124] attempt to create docker network force-systemd-flag-529000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0318 07:46:42.991142   22374 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-529000 force-systemd-flag-529000
	I0318 07:46:43.076656   22374 network_create.go:108] docker network force-systemd-flag-529000 192.168.76.0/24 created
	I0318 07:46:43.076702   22374 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-529000" container
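Subnet selection walks minikube's private-subnet candidates in order (192.168.49.0/24, .58, .67, ...), skips any already reserved, takes the first free one, and derives the node's static IP as the .2 host (the gateway is .1). The create call is reproducible verbatim from the log, and the result can be verified afterwards:

    $ docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=force-systemd-flag-529000 \
        force-systemd-flag-529000
    $ # Confirm the subnet and gateway that were assigned
    $ docker network inspect force-systemd-flag-529000 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'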
	I0318 07:46:43.076798   22374 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:46:43.128223   22374 cli_runner.go:164] Run: docker volume create force-systemd-flag-529000 --label name.minikube.sigs.k8s.io=force-systemd-flag-529000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:46:43.178898   22374 oci.go:103] Successfully created a docker volume force-systemd-flag-529000
	I0318 07:46:43.179009   22374 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-529000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-529000 --entrypoint /usr/bin/test -v force-systemd-flag-529000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:46:43.562607   22374 oci.go:107] Successfully prepared a docker volume force-systemd-flag-529000
	I0318 07:46:43.562645   22374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:46:43.562658   22374 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:46:43.562752   22374 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-529000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
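The docker run above unpacks the lz4 preload tarball into the named volume force-systemd-flag-529000 through a throwaway container whose entrypoint is tar; the volume is later mounted into the kic container so Kubernetes starts with its images already in place. Note the timestamp gap that follows: this step begins at 07:46:43 and the next log line is at 07:52:42, so extraction appears to consume essentially the whole createHost budget. What landed in the volume can be checked by reusing the kicbase image (so nothing new is pulled); a sketch:

    $ # Peek at the extracted contents of the named volume
    $ docker run --rm --entrypoint /bin/ls -v force-systemd-flag-529000:/extractDir \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f \
        /extractDir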
	I0318 07:52:42.833112   22374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:52:42.833256   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:42.886251   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:52:42.886384   22374 retry.go:31] will retry after 324.037179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
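Every "No such container" block from here on has the same shape: to open an SSH session, minikube resolves the host port mapped to the guest's 22/tcp by inspecting the container, and it retries with short backoffs while the container does not exist. The lookup on its own:

    $ # Prints the host port published for 22/tcp; fails with "No such container" until the kic container is created
    $ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-529000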
	I0318 07:52:43.212735   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:43.265982   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:52:43.266102   22374 retry.go:31] will retry after 515.157749ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:43.782156   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:43.834043   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:52:43.834147   22374 retry.go:31] will retry after 480.786836ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:44.317458   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:44.370402   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	W0318 07:52:44.370504   22374 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	
	W0318 07:52:44.370527   22374 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:44.370584   22374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:52:44.370654   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:44.420331   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:52:44.420445   22374 retry.go:31] will retry after 250.32393ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:44.671702   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:44.722589   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:52:44.722685   22374 retry.go:31] will retry after 415.15412ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:45.140184   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:45.194171   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:52:45.194263   22374 retry.go:31] will retry after 415.990068ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:45.610534   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:52:45.660516   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	W0318 07:52:45.660634   22374 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	
	W0318 07:52:45.660653   22374 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:45.660667   22374 start.go:128] duration metric: took 6m2.87309728s to createHost
	I0318 07:52:45.660675   22374 start.go:83] releasing machines lock for "force-systemd-flag-529000", held for 6m2.873176986s
	W0318 07:52:45.660690   22374 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0318 07:52:45.661097   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:45.709896   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:45.709949   22374 delete.go:82] Unable to get host status for force-systemd-flag-529000, assuming it has already been deleted: state: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	W0318 07:52:45.710022   22374 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0318 07:52:45.710034   22374 start.go:728] Will try again in 5 seconds ...
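createHost is capped at 360 s (the "create host timed out in 360.000000 seconds" above), and on this run the budget was spent inside the preload extraction, so the kic container itself was never created and every subsequent inspect fails. Before the automatic retry, the leftovers can be enumerated through the labels minikube attaches to everything it creates; a sketch:

    $ # Containers, volumes, and networks tagged for this profile
    $ docker ps -a --filter label=name.minikube.sigs.k8s.io=force-systemd-flag-529000
    $ docker volume ls --filter label=name.minikube.sigs.k8s.io=force-systemd-flag-529000
    $ docker network ls --filter label=name.minikube.sigs.k8s.io=force-systemd-flag-529000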
	I0318 07:52:50.711092   22374 start.go:360] acquireMachinesLock for force-systemd-flag-529000: {Name:mk0d4a6556f40cbe2aa31a07012baf742c4c7562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:52:50.711496   22374 start.go:364] duration metric: took 152.417µs to acquireMachinesLock for "force-systemd-flag-529000"
	I0318 07:52:50.711530   22374 start.go:96] Skipping create...Using existing machine configuration
	I0318 07:52:50.711548   22374 fix.go:54] fixHost starting: 
	I0318 07:52:50.712015   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:50.764522   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:50.764574   22374 fix.go:112] recreateIfNeeded on force-systemd-flag-529000: state= err=unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:50.764594   22374 fix.go:117] machineExists: false. err=machine does not exist
	I0318 07:52:50.786322   22374 out.go:177] * docker "force-systemd-flag-529000" container is missing, will recreate.
	I0318 07:52:50.828968   22374 delete.go:124] DEMOLISHING force-systemd-flag-529000 ...
	I0318 07:52:50.829158   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:50.880691   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	W0318 07:52:50.880734   22374 stop.go:83] unable to get state: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:50.880758   22374 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:50.881108   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:50.930530   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:50.930607   22374 delete.go:82] Unable to get host status for force-systemd-flag-529000, assuming it has already been deleted: state: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:50.930698   22374 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-529000
	W0318 07:52:50.980358   22374 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-529000 returned with exit code 1
	I0318 07:52:50.980406   22374 kic.go:371] could not find the container force-systemd-flag-529000 to remove it. will try anyways
	I0318 07:52:50.980485   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:51.030119   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	W0318 07:52:51.030168   22374 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:51.030258   22374 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-529000 /bin/bash -c "sudo init 0"
	W0318 07:52:51.079556   22374 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-529000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 07:52:51.079599   22374 oci.go:650] error shutdown force-systemd-flag-529000: docker exec --privileged -t force-systemd-flag-529000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:52.082018   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:52.133583   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:52.133647   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:52.133661   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:52:52.133688   22374 retry.go:31] will retry after 342.562847ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:52.477508   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:52.527927   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:52.527988   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:52.528004   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:52:52.528029   22374 retry.go:31] will retry after 757.297705ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:53.285817   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:53.339583   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:53.339638   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:53.339650   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:52:53.339677   22374 retry.go:31] will retry after 977.144902ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:54.317897   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:54.369662   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:54.369714   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:54.369728   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:52:54.369755   22374 retry.go:31] will retry after 1.734624535s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:56.105275   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:56.159069   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:56.159126   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:56.159135   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:52:56.159163   22374 retry.go:31] will retry after 1.545389576s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:57.705081   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:57.758774   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:57.758828   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:57.758839   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:52:57.758864   22374 retry.go:31] will retry after 2.044756275s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:59.804878   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:52:59.858374   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:52:59.858425   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:52:59.858438   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:52:59.858466   22374 retry.go:31] will retry after 3.096426876s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:53:02.955545   22374 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529000 --format={{.State.Status}}
	W0318 07:53:03.007741   22374 cli_runner.go:211] docker container inspect force-systemd-flag-529000 --format={{.State.Status}} returned with exit code 1
	I0318 07:53:03.007793   22374 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:53:03.007805   22374 oci.go:664] temporary error: container force-systemd-flag-529000 status is  but expect it to be exited
	I0318 07:53:03.007836   22374 oci.go:88] couldn't shut down force-systemd-flag-529000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	 
	I0318 07:53:03.007911   22374 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-529000
	I0318 07:53:03.057788   22374 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-529000
	W0318 07:53:03.106644   22374 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-529000 returned with exit code 1
	I0318 07:53:03.106756   22374 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:53:03.157696   22374 cli_runner.go:164] Run: docker network rm force-systemd-flag-529000
	I0318 07:53:03.266059   22374 fix.go:124] Sleeping 1 second for extra luck!
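The demolish sequence above is minikube's own cleanup: force-remove the container together with its anonymous volumes (docker rm -f -v), then remove the per-profile network. The named preload volume is left in place and recreated idempotently on the next attempt. A manual equivalent that also clears the named volume (the volume rm step is an addition, not part of the logged sequence):

    $ docker rm -f -v force-systemd-flag-529000      # container plus anonymous volumes
    $ docker network rm force-systemd-flag-529000    # per-profile bridge network
    $ docker volume rm force-systemd-flag-529000     # named preload volume (extra step)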
	I0318 07:53:04.266707   22374 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:53:04.289673   22374 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:53:04.289792   22374 start.go:159] libmachine.API.Create for "force-systemd-flag-529000" (driver="docker")
	I0318 07:53:04.289838   22374 client.go:168] LocalClient.Create starting
	I0318 07:53:04.290030   22374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:53:04.290075   22374 main.go:141] libmachine: Decoding PEM data...
	I0318 07:53:04.290087   22374 main.go:141] libmachine: Parsing certificate...
	I0318 07:53:04.290133   22374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:53:04.290166   22374 main.go:141] libmachine: Decoding PEM data...
	I0318 07:53:04.290173   22374 main.go:141] libmachine: Parsing certificate...
	I0318 07:53:04.290531   22374 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:53:04.340284   22374 cli_runner.go:211] docker network inspect force-systemd-flag-529000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:53:04.340386   22374 network_create.go:281] running [docker network inspect force-systemd-flag-529000] to gather additional debugging logs...
	I0318 07:53:04.340402   22374 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529000
	W0318 07:53:04.389550   22374 cli_runner.go:211] docker network inspect force-systemd-flag-529000 returned with exit code 1
	I0318 07:53:04.389589   22374 network_create.go:284] error running [docker network inspect force-systemd-flag-529000]: docker network inspect force-systemd-flag-529000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-529000 not found
	I0318 07:53:04.389603   22374 network_create.go:286] output of [docker network inspect force-systemd-flag-529000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-529000 not found
	
	** /stderr **
	I0318 07:53:04.389731   22374 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:53:04.441277   22374 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:53:04.442682   22374 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:53:04.444242   22374 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:53:04.445865   22374 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:53:04.447456   22374 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:53:04.448860   22374 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:53:04.449209   22374 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023901c0}
	I0318 07:53:04.449222   22374 network_create.go:124] attempt to create docker network force-systemd-flag-529000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0318 07:53:04.449293   22374 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-529000 force-systemd-flag-529000
	I0318 07:53:04.535990   22374 network_create.go:108] docker network force-systemd-flag-529000 192.168.103.0/24 created
	I0318 07:53:04.536033   22374 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-flag-529000" container
	I0318 07:53:04.536132   22374 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:53:04.587908   22374 cli_runner.go:164] Run: docker volume create force-systemd-flag-529000 --label name.minikube.sigs.k8s.io=force-systemd-flag-529000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:53:04.636887   22374 oci.go:103] Successfully created a docker volume force-systemd-flag-529000
	I0318 07:53:04.637027   22374 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-529000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-529000 --entrypoint /usr/bin/test -v force-systemd-flag-529000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:53:04.923933   22374 oci.go:107] Successfully prepared a docker volume force-systemd-flag-529000
	I0318 07:53:04.923963   22374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:53:04.923976   22374 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:53:04.924093   22374 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-529000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
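The retry repeats the first attempt step for step, now on 192.168.103.0/24, and stalls in the same place: extraction starts at 07:53:04 and the next log line is at 07:59:04, burning another full 360 s budget. Timing the extraction by hand is one way to confirm it is the slow step; a sketch, where PRELOAD_TARBALL and KICBASE_IMAGE are placeholders for the path and digest shown earlier in this log:

    $ # If this takes minutes on the host, the createHost timeout will be exceeded
    $ time docker run --rm --entrypoint /usr/bin/tar \
        -v "$PRELOAD_TARBALL":/preloaded.tar:ro -v force-systemd-flag-529000:/extractDir \
        "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir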
	I0318 07:59:04.287778   22374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:59:04.287907   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:04.343241   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:04.343359   22374 retry.go:31] will retry after 182.911191ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:04.527464   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:04.580509   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:04.580633   22374 retry.go:31] will retry after 288.787541ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:04.869872   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:04.921495   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:04.921612   22374 retry.go:31] will retry after 652.877348ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:05.574832   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:05.625338   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	W0318 07:59:05.625447   22374 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	
	W0318 07:59:05.625463   22374 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:05.625525   22374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:59:05.625587   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:05.674340   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:05.674438   22374 retry.go:31] will retry after 182.700746ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:05.859570   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:05.910714   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:05.910821   22374 retry.go:31] will retry after 379.658014ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:06.290769   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:06.340430   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:06.340539   22374 retry.go:31] will retry after 400.03493ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:06.742333   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:06.795861   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	W0318 07:59:06.795980   22374 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	
	W0318 07:59:06.795994   22374 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:06.796005   22374 start.go:128] duration metric: took 6m2.532686135s to createHost
	I0318 07:59:06.796067   22374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:59:06.796127   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:06.844955   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:06.845050   22374 retry.go:31] will retry after 186.833325ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:07.032662   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:07.085646   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:07.085749   22374 retry.go:31] will retry after 312.401926ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:07.398712   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:07.449896   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:07.449988   22374 retry.go:31] will retry after 770.406578ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:08.222091   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:08.274980   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	W0318 07:59:08.275085   22374 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	
	W0318 07:59:08.275105   22374 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:08.275166   22374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:59:08.275228   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:08.324764   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:08.324856   22374 retry.go:31] will retry after 172.198499ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:08.498427   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:08.551001   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:08.551094   22374 retry.go:31] will retry after 288.298917ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:08.840639   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:08.893570   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	I0318 07:59:08.893669   22374 retry.go:31] will retry after 386.886539ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:09.282911   22374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000
	W0318 07:59:09.336098   22374 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000 returned with exit code 1
	W0318 07:59:09.336213   22374 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	
	W0318 07:59:09.336226   22374 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-529000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	I0318 07:59:09.336246   22374 fix.go:56] duration metric: took 6m18.628263105s for fixHost
	I0318 07:59:09.336254   22374 start.go:83] releasing machines lock for "force-systemd-flag-529000", held for 6m18.62830715s
	W0318 07:59:09.336335   22374 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-529000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-529000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0318 07:59:09.378783   22374 out.go:177] 
	W0318 07:59:09.400826   22374 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0318 07:59:09.400882   22374 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0318 07:59:09.400918   22374 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0318 07:59:09.422591   22374 out.go:177] 

** /stderr **
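Note: the irregular retry cadence in the log above (172ms, 288ms, 386ms between `docker container inspect` attempts) comes from minikube's retry helper (the retry.go:31 lines). The following is only a minimal sketch of that backoff-with-jitter pattern; the helper name and jitter strategy are assumptions, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is a hypothetical helper illustrating the pattern:
// retry fn with a growing, jittered delay until it succeeds or the
// attempt budget is exhausted.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter, which yields the
		// irregular intervals seen in the log above.
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(4, 150*time.Millisecond, func() error {
		return errors.New("No such container: force-systemd-flag-529000")
	})
	fmt.Println(err)
}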
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-529000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-529000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-529000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (202.079479ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-529000 host status: state: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000
	

** /stderr **
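For context, the `exit status 80` here is downstream of the same root cause: the ssh subcommand first resolves the host port mapped to the guest's 22/tcp by running `docker container inspect` with the Go template that recurs throughout this log, and with the container missing that lookup can only fail. A rough sketch of the lookup follows; the wrapper function is illustrative, not minikube's cli_runner API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort extracts the host port published for the container's
// 22/tcp from `docker container inspect` output.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
	if err != nil {
		// A missing container makes docker exit 1 and print
		// "Error response from daemon: No such container: ..." as seen above.
		return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("force-systemd-flag-529000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}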
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-529000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-18 07:59:09.703415 -0700 PDT m=+6979.291675258
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-529000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-529000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-529000",
	        "Id": "3748504791c66b34e8b04532e26b3bdb436c3252b48268d5a25f573a64f86d1e",
	        "Created": "2024-03-18T14:53:04.496127633Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-529000"
	        }
	    }
	]

-- /stdout --
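Worth noting in the post-mortem above: `docker inspect` resolved the profile name to the leftover bridge *network* (driver "bridge", subnet 192.168.103.0/24, empty Containers map), not a container, since the container no longer exists, if it was ever created. A small sketch of decoding just the fields shown; the struct covers only this subset of the inspect schema and is illustrative.

package main

import (
	"encoding/json"
	"fmt"
)

// networkInspect models the handful of fields visible in the
// post-mortem output above (hypothetical, partial schema).
type networkInspect struct {
	Name   string `json:"Name"`
	Driver string `json:"Driver"`
	IPAM   struct {
		Config []struct {
			Subnet  string `json:"Subnet"`
			Gateway string `json:"Gateway"`
		} `json:"Config"`
	} `json:"IPAM"`
	Labels map[string]string `json:"Labels"`
}

func main() {
	// Trimmed-down copy of the inspect output above.
	raw := `[{"Name":"force-systemd-flag-529000","Driver":"bridge",
	  "IPAM":{"Config":[{"Subnet":"192.168.103.0/24","Gateway":"192.168.103.1"}]},
	  "Labels":{"created_by.minikube.sigs.k8s.io":"true"}}]`
	var nets []networkInspect
	if err := json.Unmarshal([]byte(raw), &nets); err != nil {
		panic(err)
	}
	for _, n := range nets {
		fmt.Printf("%s (%s) subnet=%s\n", n.Name, n.Driver, n.IPAM.Config[0].Subnet)
	}
}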
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-529000 -n force-systemd-flag-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-529000 -n force-systemd-flag-529000: exit status 7 (115.034783ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:59:09.869600   22777 status.go:249] status error: host: state: unknown state "force-systemd-flag-529000": docker container inspect force-systemd-flag-529000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-529000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-529000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-529000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-529000
--- FAIL: TestForceSystemdFlag (748.69s)
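Both failing assertions in this test (docker_test.go:93 and docker_test.go:112) reduce to the same shape: shell out to the built minikube binary and fail on a non-zero exit status. A minimal sketch of that pattern, assuming a placeholder helper rather than the suite's actual harness:

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube invokes the built binary and surfaces a non-zero exit
// status together with the combined output (placeholder helper).
func runMinikube(args ...string) error {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		// err is an *exec.ExitError carrying the exit status
		// (52 and 80 in the failures above).
		return fmt.Errorf("failed to run minikube with args %q: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	if err := runMinikube("start", "-p", "force-systemd-flag-529000", "--memory=2048", "--force-systemd", "--driver=docker"); err != nil {
		fmt.Println(err)
	}
}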

TestForceSystemdEnv (754.31s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-793000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0318 07:36:06.158462   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:38:03.102367   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:38:47.316780   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:41:50.365841   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:43:03.100681   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:43:47.315537   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-793000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.202002734s)

-- stdout --
	* [force-systemd-env-793000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-793000" primary control-plane node in "force-systemd-env-793000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-793000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0318 07:35:02.238035   22063 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:35:02.238216   22063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:35:02.238221   22063 out.go:304] Setting ErrFile to fd 2...
	I0318 07:35:02.238224   22063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:35:02.238422   22063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:35:02.239887   22063 out.go:298] Setting JSON to false
	I0318 07:35:02.263175   22063 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":14675,"bootTime":1710757827,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 07:35:02.263254   22063 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 07:35:02.285761   22063 out.go:177] * [force-systemd-env-793000] minikube v1.32.0 on Darwin 14.3.1
	I0318 07:35:02.349509   22063 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 07:35:02.327447   22063 notify.go:220] Checking for updates...
	I0318 07:35:02.391284   22063 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 07:35:02.412396   22063 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 07:35:02.433491   22063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 07:35:02.454261   22063 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 07:35:02.475556   22063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0318 07:35:02.497013   22063 config.go:182] Loaded profile config "offline-docker-210000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:35:02.497119   22063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 07:35:02.553201   22063 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 07:35:02.553376   22063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:35:02.661889   22063 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:11 ContainersRunning:2 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:210 SystemTime:2024-03-18 14:35:02.651296456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:35:02.704181   22063 out.go:177] * Using the docker driver based on user configuration
	I0318 07:35:02.725205   22063 start.go:297] selected driver: docker
	I0318 07:35:02.725219   22063 start.go:901] validating driver "docker" against <nil>
	I0318 07:35:02.725230   22063 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 07:35:02.728360   22063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:35:02.854962   22063 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:11 ContainersRunning:2 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:210 SystemTime:2024-03-18 14:35:02.844496754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:35:02.855132   22063 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 07:35:02.855361   22063 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 07:35:02.876275   22063 out.go:177] * Using Docker Desktop driver with root privileges
	I0318 07:35:02.897474   22063 cni.go:84] Creating CNI manager for ""
	I0318 07:35:02.897518   22063 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 07:35:02.897537   22063 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 07:35:02.897645   22063 start.go:340] cluster config:
	{Name:force-systemd-env-793000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 07:35:02.919335   22063 out.go:177] * Starting "force-systemd-env-793000" primary control-plane node in "force-systemd-env-793000" cluster
	I0318 07:35:02.961438   22063 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 07:35:02.983391   22063 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0318 07:35:03.025424   22063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:35:03.025484   22063 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 07:35:03.025503   22063 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 07:35:03.025522   22063 cache.go:56] Caching tarball of preloaded images
	I0318 07:35:03.025733   22063 preload.go:173] Found /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 07:35:03.025752   22063 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 07:35:03.025901   22063 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/force-systemd-env-793000/config.json ...
	I0318 07:35:03.026643   22063 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/force-systemd-env-793000/config.json: {Name:mk6edf469a6b93efabbd09c4c0b913c07eb0fbec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 07:35:03.076590   22063 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 07:35:03.076626   22063 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0318 07:35:03.076647   22063 cache.go:194] Successfully downloaded all kic artifacts
	I0318 07:35:03.076703   22063 start.go:360] acquireMachinesLock for force-systemd-env-793000: {Name:mk33c7eab355047f1aab6814a5e72fa7a7a8ea35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:35:03.076864   22063 start.go:364] duration metric: took 149.74µs to acquireMachinesLock for "force-systemd-env-793000"
	I0318 07:35:03.076890   22063 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-793000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 07:35:03.076964   22063 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:35:03.120110   22063 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:35:03.120483   22063 start.go:159] libmachine.API.Create for "force-systemd-env-793000" (driver="docker")
	I0318 07:35:03.120541   22063 client.go:168] LocalClient.Create starting
	I0318 07:35:03.120761   22063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:35:03.120851   22063 main.go:141] libmachine: Decoding PEM data...
	I0318 07:35:03.120884   22063 main.go:141] libmachine: Parsing certificate...
	I0318 07:35:03.120992   22063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:35:03.121061   22063 main.go:141] libmachine: Decoding PEM data...
	I0318 07:35:03.121077   22063 main.go:141] libmachine: Parsing certificate...
	I0318 07:35:03.122049   22063 cli_runner.go:164] Run: docker network inspect force-systemd-env-793000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:35:03.172858   22063 cli_runner.go:211] docker network inspect force-systemd-env-793000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:35:03.172958   22063 network_create.go:281] running [docker network inspect force-systemd-env-793000] to gather additional debugging logs...
	I0318 07:35:03.172981   22063 cli_runner.go:164] Run: docker network inspect force-systemd-env-793000
	W0318 07:35:03.222505   22063 cli_runner.go:211] docker network inspect force-systemd-env-793000 returned with exit code 1
	I0318 07:35:03.222536   22063 network_create.go:284] error running [docker network inspect force-systemd-env-793000]: docker network inspect force-systemd-env-793000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-793000 not found
	I0318 07:35:03.222555   22063 network_create.go:286] output of [docker network inspect force-systemd-env-793000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-793000 not found
	
	** /stderr **
	I0318 07:35:03.222697   22063 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:35:03.274340   22063 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:35:03.275997   22063 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:35:03.277601   22063 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:35:03.279011   22063 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:35:03.279380   22063 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00221d3a0}
	I0318 07:35:03.279396   22063 network_create.go:124] attempt to create docker network force-systemd-env-793000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0318 07:35:03.279467   22063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-793000 force-systemd-env-793000
	W0318 07:35:03.332231   22063 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-793000 force-systemd-env-793000 returned with exit code 1
	W0318 07:35:03.332272   22063 network_create.go:149] failed to create docker network force-systemd-env-793000 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-793000 force-systemd-env-793000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0318 07:35:03.332291   22063 network_create.go:116] failed to create docker network force-systemd-env-793000 192.168.85.0/24, will retry: subnet is taken
	I0318 07:35:03.333681   22063 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:35:03.334038   22063 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00226c3c0}
	I0318 07:35:03.334056   22063 network_create.go:124] attempt to create docker network force-systemd-env-793000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0318 07:35:03.334125   22063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-793000 force-systemd-env-793000
	I0318 07:35:03.420419   22063 network_create.go:108] docker network force-systemd-env-793000 192.168.94.0/24 created
	I0318 07:35:03.420467   22063 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-env-793000" container
	I0318 07:35:03.420573   22063 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:35:03.471818   22063 cli_runner.go:164] Run: docker volume create force-systemd-env-793000 --label name.minikube.sigs.k8s.io=force-systemd-env-793000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:35:03.522711   22063 oci.go:103] Successfully created a docker volume force-systemd-env-793000
	I0318 07:35:03.522856   22063 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-793000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-793000 --entrypoint /usr/bin/test -v force-systemd-env-793000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:35:03.907784   22063 oci.go:107] Successfully prepared a docker volume force-systemd-env-793000
	I0318 07:35:03.907826   22063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:35:03.907839   22063 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:35:03.907940   22063 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-793000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0318 07:41:03.162807   22063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:41:03.162942   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:03.216262   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:41:03.216396   22063 retry.go:31] will retry after 240.72232ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:03.458382   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:03.511628   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:41:03.511728   22063 retry.go:31] will retry after 339.230526ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:03.853122   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:03.904504   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:41:03.904614   22063 retry.go:31] will retry after 318.37102ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:04.223576   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:04.280476   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	W0318 07:41:04.280577   22063 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	
	W0318 07:41:04.280597   22063 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:04.280657   22063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:41:04.280723   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:04.332463   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:41:04.332552   22063 retry.go:31] will retry after 294.95694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:04.628871   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:04.680795   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:41:04.680888   22063 retry.go:31] will retry after 252.506596ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:04.935775   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:04.988113   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:41:04.988216   22063 retry.go:31] will retry after 300.225471ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:05.290829   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:05.344435   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:41:05.344529   22063 retry.go:31] will retry after 734.951828ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:06.081783   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:41:06.134546   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	W0318 07:41:06.134645   22063 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	
	W0318 07:41:06.134662   22063 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:06.134687   22063 start.go:128] duration metric: took 6m3.016399835s to createHost
	I0318 07:41:06.134695   22063 start.go:83] releasing machines lock for "force-systemd-env-793000", held for 6m3.016511373s
	W0318 07:41:06.134710   22063 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0318 07:41:06.135123   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:06.185215   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:06.185274   22063 delete.go:82] Unable to get host status for force-systemd-env-793000, assuming it has already been deleted: state: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	W0318 07:41:06.185373   22063 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0318 07:41:06.185382   22063 start.go:728] Will try again in 5 seconds ...
	I0318 07:41:11.185750   22063 start.go:360] acquireMachinesLock for force-systemd-env-793000: {Name:mk33c7eab355047f1aab6814a5e72fa7a7a8ea35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:41:11.185956   22063 start.go:364] duration metric: took 162.872µs to acquireMachinesLock for "force-systemd-env-793000"
	I0318 07:41:11.185996   22063 start.go:96] Skipping create...Using existing machine configuration
	I0318 07:41:11.186011   22063 fix.go:54] fixHost starting: 
	I0318 07:41:11.186447   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:11.240494   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:11.240541   22063 fix.go:112] recreateIfNeeded on force-systemd-env-793000: state= err=unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:11.240561   22063 fix.go:117] machineExists: false. err=machine does not exist
	I0318 07:41:11.262275   22063 out.go:177] * docker "force-systemd-env-793000" container is missing, will recreate.
	I0318 07:41:11.305966   22063 delete.go:124] DEMOLISHING force-systemd-env-793000 ...
	I0318 07:41:11.306129   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:11.356266   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	W0318 07:41:11.356321   22063 stop.go:83] unable to get state: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:11.356340   22063 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:11.356692   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:11.405211   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:11.405271   22063 delete.go:82] Unable to get host status for force-systemd-env-793000, assuming it has already been deleted: state: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:11.405360   22063 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-793000
	W0318 07:41:11.454375   22063 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-793000 returned with exit code 1
	I0318 07:41:11.454416   22063 kic.go:371] could not find the container force-systemd-env-793000 to remove it. will try anyways
	I0318 07:41:11.454492   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:11.503595   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	W0318 07:41:11.503645   22063 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:11.503724   22063 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-793000 /bin/bash -c "sudo init 0"
	W0318 07:41:11.552869   22063 cli_runner.go:211] docker exec --privileged -t force-systemd-env-793000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 07:41:11.552903   22063 oci.go:650] error shutdown force-systemd-env-793000: docker exec --privileged -t force-systemd-env-793000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:12.553339   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:12.605828   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:12.605883   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:12.605898   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:12.605930   22063 retry.go:31] will retry after 654.774936ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:13.263079   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:13.316471   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:13.316528   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:13.316546   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:13.316571   22063 retry.go:31] will retry after 640.862156ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:13.957602   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:14.009780   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:14.009837   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:14.009848   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:14.009871   22063 retry.go:31] will retry after 607.696991ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:14.620001   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:14.672429   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:14.672480   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:14.672493   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:14.672517   22063 retry.go:31] will retry after 1.025462281s: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:15.698391   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:15.749160   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:15.749223   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:15.749237   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:15.749264   22063 retry.go:31] will retry after 2.500666498s: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:18.251187   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:18.304461   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:18.304510   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:18.304520   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:18.304546   22063 retry.go:31] will retry after 4.809547183s: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:23.115666   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:23.168011   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:23.168068   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:23.168081   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:23.168108   22063 retry.go:31] will retry after 5.198997775s: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:28.368126   22063 cli_runner.go:164] Run: docker container inspect force-systemd-env-793000 --format={{.State.Status}}
	W0318 07:41:28.419007   22063 cli_runner.go:211] docker container inspect force-systemd-env-793000 --format={{.State.Status}} returned with exit code 1
	I0318 07:41:28.419058   22063 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:41:28.419073   22063 oci.go:664] temporary error: container force-systemd-env-793000 status is  but expect it to be exited
	I0318 07:41:28.419105   22063 oci.go:88] couldn't shut down force-systemd-env-793000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	 
	I0318 07:41:28.419186   22063 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-793000
	I0318 07:41:28.468838   22063 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-793000
	W0318 07:41:28.517529   22063 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-793000 returned with exit code 1
	I0318 07:41:28.517648   22063 cli_runner.go:164] Run: docker network inspect force-systemd-env-793000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:41:28.567346   22063 cli_runner.go:164] Run: docker network rm force-systemd-env-793000
	I0318 07:41:28.695550   22063 fix.go:124] Sleeping 1 second for extra luck!
	I0318 07:41:29.697713   22063 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:41:29.718643   22063 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0318 07:41:29.718838   22063 start.go:159] libmachine.API.Create for "force-systemd-env-793000" (driver="docker")
	I0318 07:41:29.718864   22063 client.go:168] LocalClient.Create starting
	I0318 07:41:29.719067   22063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:41:29.719163   22063 main.go:141] libmachine: Decoding PEM data...
	I0318 07:41:29.719188   22063 main.go:141] libmachine: Parsing certificate...
	I0318 07:41:29.719280   22063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:41:29.719349   22063 main.go:141] libmachine: Decoding PEM data...
	I0318 07:41:29.719364   22063 main.go:141] libmachine: Parsing certificate...
	I0318 07:41:29.720168   22063 cli_runner.go:164] Run: docker network inspect force-systemd-env-793000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:41:29.772238   22063 cli_runner.go:211] docker network inspect force-systemd-env-793000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:41:29.772324   22063 network_create.go:281] running [docker network inspect force-systemd-env-793000] to gather additional debugging logs...
	I0318 07:41:29.772347   22063 cli_runner.go:164] Run: docker network inspect force-systemd-env-793000
	W0318 07:41:29.822168   22063 cli_runner.go:211] docker network inspect force-systemd-env-793000 returned with exit code 1
	I0318 07:41:29.822202   22063 network_create.go:284] error running [docker network inspect force-systemd-env-793000]: docker network inspect force-systemd-env-793000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-793000 not found
	I0318 07:41:29.822220   22063 network_create.go:286] output of [docker network inspect force-systemd-env-793000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-793000 not found
	
	** /stderr **
	I0318 07:41:29.822372   22063 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:41:29.873518   22063 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:41:29.874990   22063 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:41:29.876591   22063 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:41:29.878318   22063 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:41:29.879850   22063 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:41:29.881396   22063 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:41:29.882963   22063 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:41:29.883371   22063 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00235d550}
	I0318 07:41:29.883385   22063 network_create.go:124] attempt to create docker network force-systemd-env-793000 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 65535 ...
	I0318 07:41:29.883452   22063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-793000 force-systemd-env-793000
	I0318 07:41:29.970044   22063 network_create.go:108] docker network force-systemd-env-793000 192.168.112.0/24 created
	I0318 07:41:29.970099   22063 kic.go:121] calculated static IP "192.168.112.2" for the "force-systemd-env-793000" container
	I0318 07:41:29.970223   22063 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:41:30.021153   22063 cli_runner.go:164] Run: docker volume create force-systemd-env-793000 --label name.minikube.sigs.k8s.io=force-systemd-env-793000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:41:30.070267   22063 oci.go:103] Successfully created a docker volume force-systemd-env-793000
	I0318 07:41:30.070380   22063 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-793000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-793000 --entrypoint /usr/bin/test -v force-systemd-env-793000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:41:30.380161   22063 oci.go:107] Successfully prepared a docker volume force-systemd-env-793000
	I0318 07:41:30.380215   22063 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:41:30.380231   22063 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:41:30.380339   22063 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-793000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0318 07:47:29.716775   22063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:47:29.716902   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:29.769358   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:29.769472   22063 retry.go:31] will retry after 219.616695ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:29.990944   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:30.043983   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:30.044105   22063 retry.go:31] will retry after 472.695039ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:30.517290   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:30.571291   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:30.571390   22063 retry.go:31] will retry after 674.934099ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:31.247184   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:31.299240   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	W0318 07:47:31.299356   22063 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	
	W0318 07:47:31.299372   22063 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:31.299430   22063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:47:31.299500   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:31.349205   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:31.349299   22063 retry.go:31] will retry after 350.414004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:31.701070   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:31.753633   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:31.753736   22063 retry.go:31] will retry after 234.759335ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:31.989349   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:32.042664   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:32.042763   22063 retry.go:31] will retry after 408.915114ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:32.452159   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:32.506514   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	W0318 07:47:32.506622   22063 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	
	W0318 07:47:32.506638   22063 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:32.506654   22063 start.go:128] duration metric: took 6m2.812308257s to createHost
	I0318 07:47:32.506730   22063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:47:32.506788   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:32.557584   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:32.557683   22063 retry.go:31] will retry after 288.985514ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:32.849041   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:32.899710   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:32.899809   22063 retry.go:31] will retry after 272.01124ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:33.174008   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:33.227570   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:33.227663   22063 retry.go:31] will retry after 336.675153ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:33.565033   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:33.618757   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	W0318 07:47:33.618863   22063 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	
	W0318 07:47:33.618881   22063 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:33.618937   22063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:47:33.618996   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:33.669563   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:33.669656   22063 retry.go:31] will retry after 265.889445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:33.936302   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:33.986728   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:33.986820   22063 retry.go:31] will retry after 513.696961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:34.502988   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:34.555392   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	I0318 07:47:34.555491   22063 retry.go:31] will retry after 648.347892ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:35.206300   22063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000
	W0318 07:47:35.259043   22063 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000 returned with exit code 1
	W0318 07:47:35.259140   22063 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	
	W0318 07:47:35.259155   22063 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-793000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-793000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	I0318 07:47:35.259167   22063 fix.go:56] duration metric: took 6m24.076770606s for fixHost
	I0318 07:47:35.259175   22063 start.go:83] releasing machines lock for "force-systemd-env-793000", held for 6m24.076820135s
	W0318 07:47:35.259251   22063 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-793000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-793000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0318 07:47:35.301779   22063 out.go:177] 
	W0318 07:47:35.323741   22063 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0318 07:47:35.323801   22063 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0318 07:47:35.323832   22063 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0318 07:47:35.345906   22063 out.go:177] 

** /stderr **
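
The repeated "will retry after ..." lines in the stderr capture above are minikube's bounded poll-and-backoff around `docker container inspect` while it verifies that the container has shut down. A minimal, self-contained Go sketch of that pattern follows; the helper name, timings, and deadline are illustrative assumptions, not minikube's actual retry.go API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForExited polls `docker container inspect` until the named container
// reports the "exited" state or the deadline passes, growing the delay
// between attempts. A failed inspect ("No such container") is treated as a
// temporary, unknown state and retried, exactly as the log above shows.
func waitForExited(name string, deadline time.Duration) error {
	delay := 500 * time.Millisecond
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // back off between attempts (the log shows jittered delays growing to ~5s)
	}
	return fmt.Errorf("container %q never reached state \"exited\"", name)
}

func main() {
	if err := waitForExited("force-systemd-env-793000", 15*time.Second); err != nil {
		fmt.Println(err)
	}
}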
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-793000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-793000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-793000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (204.361174ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-793000 host status: state: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-793000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-18 07:47:35.626971 -0700 PDT m=+6285.249949568
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-793000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-793000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-793000",
	        "Id": "8112a00f54c0f1b769068f3faa4f5f17598aea7623c58b1a8121a3712dfbbb98",
	        "Created": "2024-03-18T14:41:29.930348721Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.112.0/24",
	                    "Gateway": "192.168.112.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-793000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-793000 -n force-systemd-env-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-793000 -n force-systemd-env-793000: exit status 7 (113.022103ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:47:35.791441   22467 status.go:249] status error: host: state: unknown state "force-systemd-env-793000": docker container inspect force-systemd-env-793000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-793000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-793000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-793000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-793000
--- FAIL: TestForceSystemdEnv (754.31s)
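
The network_create.go lines in this failure also show how minikube picks a subnet for the cluster network: candidate 192.168.x.0/24 blocks are tried in steps of 9 in the third octet (49, 58, 67, ...) until one is not already taken, which is why 192.168.112.0/24 is selected after seven reserved subnets are skipped. A self-contained Go sketch of that scan, with the reservation check stubbed out (the real code consults existing docker networks; the helper below is an illustrative assumption, not minikube's network.go API):

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 networks in steps of 9 in
// the third octet, returning the first CIDR the caller does not consider
// reserved. The start value and step match the subnets skipped in the log.
func firstFreeSubnet(isReserved func(cidr string) bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !isReserved(cidr) {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	reserved := map[string]bool{ // the subnets the log reports as reserved
		"192.168.49.0/24":  true,
		"192.168.58.0/24":  true,
		"192.168.67.0/24":  true,
		"192.168.76.0/24":  true,
		"192.168.85.0/24":  true,
		"192.168.94.0/24":  true,
		"192.168.103.0/24": true,
	}
	cidr, ok := firstFreeSubnet(func(c string) bool { return reserved[c] })
	fmt.Println(cidr, ok) // prints "192.168.112.0/24 true", matching the log
}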

TestMountStart/serial/VerifyMountSecond (883.48s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-789000 ssh -- ls /minikube-host
E0318 06:33:03.013952   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:33:47.229054   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:35:10.275760   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:38:03.042834   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:38:47.256672   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:43:03.046120   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:43:47.260733   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-789000 ssh -- ls /minikube-host: signal: killed (14m43.02767124s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-789000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-789000
helpers_test.go:235: (dbg) docker inspect mount-start-2-789000:

-- stdout --
	[
	    {
	        "Id": "cac444d3e28f8712eb6087ccec5d030ccf7c7d1b8fa8c2d41a7b65ec4a49eb44",
	        "Created": "2024-03-18T13:30:50.964877468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246767,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-18T13:30:51.187991579Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:824841ec881aeec3697aa896b6eaaaed4a34726d2ba99ff4b9ca0b12f150022e",
	        "ResolvConfPath": "/var/lib/docker/containers/cac444d3e28f8712eb6087ccec5d030ccf7c7d1b8fa8c2d41a7b65ec4a49eb44/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac444d3e28f8712eb6087ccec5d030ccf7c7d1b8fa8c2d41a7b65ec4a49eb44/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac444d3e28f8712eb6087ccec5d030ccf7c7d1b8fa8c2d41a7b65ec4a49eb44/hosts",
	        "LogPath": "/var/lib/docker/containers/cac444d3e28f8712eb6087ccec5d030ccf7c7d1b8fa8c2d41a7b65ec4a49eb44/cac444d3e28f8712eb6087ccec5d030ccf7c7d1b8fa8c2d41a7b65ec4a49eb44-json.log",
	        "Name": "/mount-start-2-789000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-789000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-789000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65bbce39174ea6aaf16f2bf8d999cf483b6297db5ccb7d16b440ecca7a6eaad0-init/diff:/var/lib/docker/overlay2/a0e448129c0daf7d9b141857b90b7870d0a196fc722c9128f6db6c5c4106bb49/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65bbce39174ea6aaf16f2bf8d999cf483b6297db5ccb7d16b440ecca7a6eaad0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65bbce39174ea6aaf16f2bf8d999cf483b6297db5ccb7d16b440ecca7a6eaad0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65bbce39174ea6aaf16f2bf8d999cf483b6297db5ccb7d16b440ecca7a6eaad0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-789000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-789000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-789000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-789000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-789000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f52ad47b0fc7104a4ced41821de702550533f25d1eda51eb243312e78aeaa834",
	            "SandboxKey": "/var/run/docker/netns/f52ad47b0fc7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55886"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55887"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55889"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55890"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-789000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cac444d3e28f",
	                        "mount-start-2-789000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "9642234a9ceb4c2f02430884b8cb173d3e8d583743abc88e713f93ffc6d1f590",
	                    "EndpointID": "32994d9a3c6eccc7a82735ccf5f81819e10d11333cafdd02cc6aeb6bbe701f58",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-789000",
	                        "cac444d3e28f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-789000 -n mount-start-2-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-789000 -n mount-start-2-789000: exit status 6 (402.326037ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0318 06:45:40.798104   19901 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-789000" does not appear in /Users/jenkins/minikube-integration/18429-11233/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-789000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (883.48s)
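
This failure is a hang rather than an error: `ls /minikube-host` over ssh never returned (the docker inspect above confirms the /host_mnt/Users bind mount exists), so the harness killed the command after 14m43s. Below is a sketch of the same probe with an explicit client-side bound, so a wedged mount fails fast instead of consuming the suite timeout; the 30-second deadline is an assumption, while the binary, profile name, and mount path come from the log.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the probe explicitly instead of relying on the harness to
	// SIGKILL it after 14+ minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx,
		"out/minikube-darwin-amd64", "-p", "mount-start-2-789000",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		fmt.Println("mount probe hung: ls /minikube-host did not return")
		return
	}
	fmt.Printf("%s(err: %v)\n", out, err)
}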

TestMultiNode/serial/FreshStart2Nodes (752.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-242000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0318 06:48:03.048079   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:48:47.263355   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:51:50.312526   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:53:03.052458   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:53:47.268326   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:58:03.028074   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:58:47.242973   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-242000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m32.577623162s)

-- stdout --
	* [multinode-242000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-242000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0318 06:46:51.585593   20038 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:46:51.586321   20038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:46:51.586330   20038 out.go:304] Setting ErrFile to fd 2...
	I0318 06:46:51.586336   20038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:46:51.586755   20038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 06:46:51.588502   20038 out.go:298] Setting JSON to false
	I0318 06:46:51.611226   20038 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11784,"bootTime":1710757827,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 06:46:51.611335   20038 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 06:46:51.632964   20038 out.go:177] * [multinode-242000] minikube v1.32.0 on Darwin 14.3.1
	I0318 06:46:51.675810   20038 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 06:46:51.675831   20038 notify.go:220] Checking for updates...
	I0318 06:46:51.718504   20038 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 06:46:51.739637   20038 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 06:46:51.760728   20038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 06:46:51.781667   20038 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 06:46:51.802716   20038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 06:46:51.823962   20038 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 06:46:51.879493   20038 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 06:46:51.879649   20038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:46:51.977498   20038 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:false NGoroutines:120 SystemTime:2024-03-18 13:46:51.967529267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
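
For readers unfamiliar with the probe above: minikube shells out to docker system info --format "{{json .}}" and decodes the JSON reply to confirm the daemon is usable before committing to the driver. Below is a minimal Go sketch of that pattern; it is not minikube's actual info.go helper, and the struct is a hand-picked subset of the fields visible in the log line above.

    // Sketch only: run `docker system info --format "{{json .}}"` (the exact
    // command in the log) and decode a few fields from the JSON reply.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DockerInfo holds an illustrative subset of the fields in the log line;
    // the real reply has many more.
    type DockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker system info failed:", err)
            return
        }
        var info DockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%s on %s: %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }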
	I0318 06:46:51.999172   20038 out.go:177] * Using the docker driver based on user configuration
	I0318 06:46:52.041244   20038 start.go:297] selected driver: docker
	I0318 06:46:52.041269   20038 start.go:901] validating driver "docker" against <nil>
	I0318 06:46:52.041286   20038 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 06:46:52.045695   20038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:46:52.144094   20038 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:false NGoroutines:120 SystemTime:2024-03-18 13:46:52.134497052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:46:52.144300   20038 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 06:46:52.144471   20038 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 06:46:52.165105   20038 out.go:177] * Using Docker Desktop driver with root privileges
	I0318 06:46:52.186335   20038 cni.go:84] Creating CNI manager for ""
	I0318 06:46:52.186367   20038 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 06:46:52.186383   20038 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 06:46:52.186481   20038 start.go:340] cluster config:
	{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 06:46:52.208404   20038 out.go:177] * Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	I0318 06:46:52.251186   20038 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 06:46:52.273198   20038 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0318 06:46:52.315445   20038 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 06:46:52.315515   20038 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 06:46:52.315528   20038 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 06:46:52.315552   20038 cache.go:56] Caching tarball of preloaded images
	I0318 06:46:52.315790   20038 preload.go:173] Found /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 06:46:52.315822   20038 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 06:46:52.317325   20038 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/multinode-242000/config.json ...
	I0318 06:46:52.317410   20038 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/multinode-242000/config.json: {Name:mk6cc8bf350df69d03ab4ad458df923b27bb8a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 06:46:52.366439   20038 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 06:46:52.366478   20038 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0318 06:46:52.366503   20038 cache.go:194] Successfully downloaded all kic artifacts
	I0318 06:46:52.366549   20038 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkba9fe2419e9cf6c0347d7f2eb6e7a616348974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 06:46:52.366692   20038 start.go:364] duration metric: took 131.839µs to acquireMachinesLock for "multinode-242000"
	I0318 06:46:52.366725   20038 start.go:93] Provisioning new machine with config: &{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 06:46:52.366783   20038 start.go:125] createHost starting for "" (driver="docker")
	I0318 06:46:52.410130   20038 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0318 06:46:52.410536   20038 start.go:159] libmachine.API.Create for "multinode-242000" (driver="docker")
	I0318 06:46:52.410587   20038 client.go:168] LocalClient.Create starting
	I0318 06:46:52.410763   20038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 06:46:52.410853   20038 main.go:141] libmachine: Decoding PEM data...
	I0318 06:46:52.410906   20038 main.go:141] libmachine: Parsing certificate...
	I0318 06:46:52.411006   20038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 06:46:52.411076   20038 main.go:141] libmachine: Decoding PEM data...
	I0318 06:46:52.411105   20038 main.go:141] libmachine: Parsing certificate...
	I0318 06:46:52.412087   20038 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 06:46:52.464475   20038 cli_runner.go:211] docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 06:46:52.464572   20038 network_create.go:281] running [docker network inspect multinode-242000] to gather additional debugging logs...
	I0318 06:46:52.464592   20038 cli_runner.go:164] Run: docker network inspect multinode-242000
	W0318 06:46:52.513529   20038 cli_runner.go:211] docker network inspect multinode-242000 returned with exit code 1
	I0318 06:46:52.513557   20038 network_create.go:284] error running [docker network inspect multinode-242000]: docker network inspect multinode-242000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-242000 not found
	I0318 06:46:52.513568   20038 network_create.go:286] output of [docker network inspect multinode-242000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-242000 not found
	
	** /stderr **
	I0318 06:46:52.513705   20038 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 06:46:52.564671   20038 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:46:52.566301   20038 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:46:52.567879   20038 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:46:52.568273   20038 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002296ed0}
	I0318 06:46:52.568289   20038 network_create.go:124] attempt to create docker network multinode-242000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0318 06:46:52.568363   20038 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000
	I0318 06:46:52.654656   20038 network_create.go:108] docker network multinode-242000 192.168.76.0/24 created
	I0318 06:46:52.654693   20038 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-242000" container
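
The three "skipping subnet ... that is reserved" lines above show the free-subnet scan: candidates advance through 192.168.49.0/24, 58, 67, 76, and so on (a step of 9 in the third octet, as the log shows) until one is not claimed by an existing docker network, and the node is then assigned the first client address in it (.2). A minimal sketch of that scan, assuming the step-of-9 pattern visible in the log rather than the real network.go logic:

    // Sketch only: return the first candidate /24 not already reserved.
    package main

    import "fmt"

    func firstFreeSubnet(reserved map[string]bool) string {
        for third := 49; third <= 255; third += 9 { // step of 9, as in the log
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if !reserved[subnet] {
                return subnet
            }
        }
        return ""
    }

    func main() {
        // Subnets the log reports as reserved before settling on 192.168.76.0/24.
        reserved := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(reserved)) // prints 192.168.76.0/24
    }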
	I0318 06:46:52.654820   20038 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 06:46:52.704584   20038 cli_runner.go:164] Run: docker volume create multinode-242000 --label name.minikube.sigs.k8s.io=multinode-242000 --label created_by.minikube.sigs.k8s.io=true
	I0318 06:46:52.755103   20038 oci.go:103] Successfully created a docker volume multinode-242000
	I0318 06:46:52.755238   20038 cli_runner.go:164] Run: docker run --rm --name multinode-242000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242000 --entrypoint /usr/bin/test -v multinode-242000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 06:46:53.120441   20038 oci.go:107] Successfully prepared a docker volume multinode-242000
	I0318 06:46:53.120483   20038 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 06:46:53.120497   20038 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 06:46:53.120586   20038 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
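
The two docker run invocations above are the preload step: a sidecar container first checks the named volume (the /usr/bin/test entrypoint with -d /var/lib), then a second throwaway container untars the lz4-compressed image cache into the volume that will back /var. A minimal Go sketch of the extraction call, a hypothetical wrapper around the exact command in the log:

    // Sketch only: mount the preload tarball read-only into a throwaway
    // container and extract it into the named volume.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func extractPreload(volume, tarball, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // tarball, read-only
            "-v", volume+":/extractDir",        // named volume receiving the images
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Names taken from the log; the image digest suffix is elided here.
        err := extractPreload("multinode-242000",
            "/Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375")
        fmt.Println(err)
    }

Note the timestamps around this point: the extraction run is where the six-minute gap opens up, which is what eventually trips the createHost timeout below.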
	I0318 06:52:52.415142   20038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 06:52:52.415300   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:52.468551   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:52:52.468681   20038 retry.go:31] will retry after 161.98781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:52.631624   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:52.682127   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:52:52.682245   20038 retry.go:31] will retry after 285.936907ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:52.969742   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:53.020652   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:52:53.020745   20038 retry.go:31] will retry after 537.584724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:53.558563   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:53.612626   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:52:53.612718   20038 retry.go:31] will retry after 435.530507ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:54.050702   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:54.106868   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 06:52:54.106972   20038 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 06:52:54.106989   20038 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
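
Each "retry.go:31] will retry after ..." line above is one round of a retry loop around the port-22 lookup: docker container inspect is asked for the host port mapped to the container's 22/tcp, and because the container was never actually created, every round fails with "No such container" until the attempt budget is spent and the last error is surfaced. A minimal sketch of that loop, with illustrative fixed-growth delays standing in for the randomized intervals in the log:

    // Sketch only: retry the ssh host-port lookup (the exact docker command
    // in the log) with a growing delay between attempts.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func sshHostPort(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
        if err != nil {
            return "", fmt.Errorf("get port 22 for %q: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        delay := 150 * time.Millisecond
        var port string
        var err error
        for attempt := 0; attempt < 5; attempt++ {
            if port, err = sshHostPort("multinode-242000"); err == nil {
                break
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // grow the wait each round, roughly as the log shows
        }
        fmt.Println(port, err) // the last error wins if all attempts fail
    }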
	I0318 06:52:54.107057   20038 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 06:52:54.107110   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:54.156241   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:52:54.156343   20038 retry.go:31] will retry after 131.35194ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:54.288304   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:54.338839   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:52:54.338945   20038 retry.go:31] will retry after 423.591622ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:54.763440   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:54.814394   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:52:54.814499   20038 retry.go:31] will retry after 650.113471ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:55.464901   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:52:55.517004   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 06:52:55.517109   20038 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 06:52:55.517132   20038 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:52:55.517144   20038 start.go:128] duration metric: took 6m3.146488043s to createHost
	I0318 06:52:55.517150   20038 start.go:83] releasing machines lock for "multinode-242000", held for 6m3.146587737s
	W0318 06:52:55.517170   20038 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0318 06:52:55.517586   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:52:55.567239   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:52:55.567292   20038 delete.go:82] Unable to get host status for multinode-242000, assuming it has already been deleted: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	W0318 06:52:55.567366   20038 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0318 06:52:55.567374   20038 start.go:728] Will try again in 5 seconds ...
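
The "create host timed out in 360.000000 seconds" failure above is a time budget around host creation: the 6-minute budget expires while the preload extraction is still running, so no container ever appears, StartHost fails, and minikube waits 5 seconds before one more attempt. A minimal sketch of that budget using a context deadline (a hypothetical helper, not minikube's start.go):

    // Sketch only: bound host creation with a 360s deadline and retry once.
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // createHost stands in for the provisioning work; here it pretends to
    // finish quickly so the example terminates.
    func createHost(ctx context.Context) error {
        select {
        case <-time.After(500 * time.Millisecond):
            return nil
        case <-ctx.Done():
            return fmt.Errorf("create host timed out in %v", 360*time.Second)
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
        defer cancel()
        if err := createHost(ctx); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
            // a second attempt would run here
        }
    }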
	I0318 06:53:00.569167   20038 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkba9fe2419e9cf6c0347d7f2eb6e7a616348974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 06:53:00.569516   20038 start.go:364] duration metric: took 278.525µs to acquireMachinesLock for "multinode-242000"
	I0318 06:53:00.569565   20038 start.go:96] Skipping create...Using existing machine configuration
	I0318 06:53:00.569580   20038 fix.go:54] fixHost starting: 
	I0318 06:53:00.570056   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:00.621538   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:00.621581   20038 fix.go:112] recreateIfNeeded on multinode-242000: state= err=unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:00.621597   20038 fix.go:117] machineExists: false. err=machine does not exist
	I0318 06:53:00.643352   20038 out.go:177] * docker "multinode-242000" container is missing, will recreate.
	I0318 06:53:00.686180   20038 delete.go:124] DEMOLISHING multinode-242000 ...
	I0318 06:53:00.686375   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:00.737059   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 06:53:00.737112   20038 stop.go:83] unable to get state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:00.737134   20038 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:00.737520   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:00.786583   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:00.786645   20038 delete.go:82] Unable to get host status for multinode-242000, assuming it has already been deleted: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:00.786733   20038 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 06:53:00.835469   20038 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 06:53:00.835505   20038 kic.go:371] could not find the container multinode-242000 to remove it. will try anyways
	I0318 06:53:00.835575   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:00.885067   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 06:53:00.885111   20038 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:00.885212   20038 cli_runner.go:164] Run: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0"
	W0318 06:53:00.934410   20038 cli_runner.go:211] docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 06:53:00.934459   20038 oci.go:650] error shutdown multinode-242000: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:01.936277   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:01.989879   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:01.989925   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:01.989941   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:01.989971   20038 retry.go:31] will retry after 278.365321ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:02.270065   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:02.320616   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:02.320660   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:02.320670   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:02.320694   20038 retry.go:31] will retry after 562.420016ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:02.885461   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:02.935890   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:02.935955   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:02.935969   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:02.935992   20038 retry.go:31] will retry after 1.63086729s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:04.567872   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:04.620643   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:04.620692   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:04.620706   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:04.620732   20038 retry.go:31] will retry after 1.199621686s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:05.820658   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:05.872768   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:05.872822   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:05.872832   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:05.872856   20038 retry.go:31] will retry after 3.089040113s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:08.964289   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:09.017955   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:09.018001   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:09.018013   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:09.018038   20038 retry.go:31] will retry after 3.381484645s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:12.400805   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:12.452476   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:12.452540   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:12.452553   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:12.452578   20038 retry.go:31] will retry after 3.935079236s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:16.388662   20038 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 06:53:16.440655   20038 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 06:53:16.440703   20038 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:53:16.440713   20038 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 06:53:16.440742   20038 oci.go:88] couldn't shut down multinode-242000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	 
	I0318 06:53:16.440827   20038 cli_runner.go:164] Run: docker rm -f -v multinode-242000
	I0318 06:53:16.490502   20038 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 06:53:16.539871   20038 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 06:53:16.539980   20038 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 06:53:16.589412   20038 cli_runner.go:164] Run: docker network rm multinode-242000
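
The DEMOLISHING sequence above first attempts a graceful shutdown (docker exec ... "sudo init 0"), then polls .State.Status until it reads "exited". Since the container never existed, every poll returns "No such container"; once the verification budget is spent the code falls through to the force path, docker rm -f -v followed by docker network rm. A minimal sketch of the verification poll (a hypothetical helper, not minikube's oci.go):

    // Sketch only: poll the container state and accept only "exited";
    // on give-up, fall through to force removal as the log does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitExited(name string, attempts int) error {
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            status := strings.TrimSpace(string(out))
            if err == nil && status == "exited" {
                return nil
            }
            fmt.Printf("container %s status is %q but expect it to be exited\n", name, status)
            time.Sleep(time.Second) // the real loop uses growing, jittered waits
        }
        return fmt.Errorf("couldn't verify container %s is exited", name)
    }

    func main() {
        if err := waitExited("multinode-242000", 8); err != nil {
            // Force removal, as the log does with `docker rm -f -v`.
            _ = exec.Command("docker", "rm", "-f", "-v", "multinode-242000").Run()
        }
    }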
	I0318 06:53:16.699241   20038 fix.go:124] Sleeping 1 second for extra luck!
	I0318 06:53:17.701155   20038 start.go:125] createHost starting for "" (driver="docker")
	I0318 06:53:17.723355   20038 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0318 06:53:17.723600   20038 start.go:159] libmachine.API.Create for "multinode-242000" (driver="docker")
	I0318 06:53:17.723644   20038 client.go:168] LocalClient.Create starting
	I0318 06:53:17.723884   20038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 06:53:17.723981   20038 main.go:141] libmachine: Decoding PEM data...
	I0318 06:53:17.724005   20038 main.go:141] libmachine: Parsing certificate...
	I0318 06:53:17.724083   20038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 06:53:17.724150   20038 main.go:141] libmachine: Decoding PEM data...
	I0318 06:53:17.724166   20038 main.go:141] libmachine: Parsing certificate...
	I0318 06:53:17.724865   20038 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 06:53:17.837393   20038 cli_runner.go:211] docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 06:53:17.837524   20038 network_create.go:281] running [docker network inspect multinode-242000] to gather additional debugging logs...
	I0318 06:53:17.837542   20038 cli_runner.go:164] Run: docker network inspect multinode-242000
	W0318 06:53:17.887490   20038 cli_runner.go:211] docker network inspect multinode-242000 returned with exit code 1
	I0318 06:53:17.887515   20038 network_create.go:284] error running [docker network inspect multinode-242000]: docker network inspect multinode-242000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-242000 not found
	I0318 06:53:17.887527   20038 network_create.go:286] output of [docker network inspect multinode-242000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-242000 not found
	
	** /stderr **
	I0318 06:53:17.887647   20038 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 06:53:17.939314   20038 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:53:17.941005   20038 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:53:17.942509   20038 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:53:17.944038   20038 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:53:17.944492   20038 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002297bd0}
	I0318 06:53:17.944512   20038 network_create.go:124] attempt to create docker network multinode-242000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0318 06:53:17.944600   20038 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000
	W0318 06:53:17.994751   20038 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000 returned with exit code 1
	W0318 06:53:17.994793   20038 network_create.go:149] failed to create docker network multinode-242000 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0318 06:53:17.994816   20038 network_create.go:116] failed to create docker network multinode-242000 192.168.85.0/24, will retry: subnet is taken
	I0318 06:53:17.996186   20038 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 06:53:17.996634   20038 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00231d6f0}
	I0318 06:53:17.996653   20038 network_create.go:124] attempt to create docker network multinode-242000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0318 06:53:17.996722   20038 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000
	I0318 06:53:18.083226   20038 network_create.go:108] docker network multinode-242000 192.168.94.0/24 created
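
The "Pool overlaps with other one on this address space" error just above shows why the subnet scan alone is not enough: a subnet can look free to the scanner yet still collide when docker network create runs, so the failure is treated as "subnet is taken" and the next candidate is tried (85, then 94, which succeeds). A minimal sketch of that create-with-fallback, assuming the same candidate stepping as before and fewer flags than the full command in the log:

    // Sketch only: try each candidate subnet; on a pool-overlap error,
    // advance to the next one instead of failing outright.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func createNetwork(name string, thirdOctets []int) (string, error) {
        for _, o := range thirdOctets {
            subnet := fmt.Sprintf("192.168.%d.0/24", o)
            gateway := fmt.Sprintf("192.168.%d.1", o)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if strings.Contains(string(out), "Pool overlaps") {
                fmt.Printf("failed to create docker network %s %s, will retry: subnet is taken\n", name, subnet)
                continue // next candidate, as the log does (85 -> 94)
            }
            return "", err // any other failure is fatal
        }
        return "", fmt.Errorf("no free subnet for %s", name)
    }

    func main() {
        subnet, err := createNetwork("multinode-242000", []int{85, 94})
        fmt.Println(subnet, err)
    }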
	I0318 06:53:18.083271   20038 kic.go:121] calculated static IP "192.168.94.2" for the "multinode-242000" container
	I0318 06:53:18.083378   20038 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 06:53:18.133175   20038 cli_runner.go:164] Run: docker volume create multinode-242000 --label name.minikube.sigs.k8s.io=multinode-242000 --label created_by.minikube.sigs.k8s.io=true
	I0318 06:53:18.182395   20038 oci.go:103] Successfully created a docker volume multinode-242000
	I0318 06:53:18.182518   20038 cli_runner.go:164] Run: docker run --rm --name multinode-242000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242000 --entrypoint /usr/bin/test -v multinode-242000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 06:53:18.462737   20038 oci.go:107] Successfully prepared a docker volume multinode-242000
	I0318 06:53:18.462783   20038 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 06:53:18.462796   20038 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 06:53:18.462894   20038 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0318 06:59:17.700716   20038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 06:59:17.700848   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:17.815978   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:17.816069   20038 retry.go:31] will retry after 368.05069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
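
From here to the end of the dump one failure repeats: the node container was never created, so every docker container inspect answers "No such container" and retry.go schedules another attempt after a short randomized pause (368ms, 509ms, 383ms, ...). A generic sketch of that pattern, assuming nothing about minikube's actual retry package (imports: math/rand, time):

	// retryShort retries f with sub-second randomized delays, roughly
	// matching the "will retry after ..." intervals logged above.
	func retryShort(attempts int, f func() error) (err error) {
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			time.Sleep(time.Duration(100+rand.Intn(700)) * time.Millisecond)
		}
		return err
	}
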
	I0318 06:59:18.186484   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:18.239303   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:18.239414   20038 retry.go:31] will retry after 509.297906ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:18.749515   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:18.802215   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:18.802337   20038 retry.go:31] will retry after 383.468753ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
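
The format string in those inspect calls is a Go text/template: it indexes the container's published-port map at "22/tcp", takes the first binding, and reads its HostPort. The same template evaluated against a literal value (field names mirror Docker's inspect JSON; the port number is invented; imports: os, text/template):

	t := template.Must(template.New("ssh").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]map[string]string{
				"22/tcp": {{"HostPort": "55100"}}, // hypothetical binding
			},
		},
	}
	_ = t.Execute(os.Stdout, data) // prints 55100
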
	I0318 06:59:19.187893   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:19.238562   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 06:59:19.238676   20038 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 06:59:19.238695   20038 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:19.238754   20038 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 06:59:19.238811   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:19.288048   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:19.288153   20038 retry.go:31] will retry after 372.840658ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:19.663325   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:19.715037   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:19.715147   20038 retry.go:31] will retry after 449.722947ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:20.167236   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:20.220096   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:20.220198   20038 retry.go:31] will retry after 567.544046ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:20.790143   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:20.841167   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 06:59:20.841272   20038 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 06:59:20.841291   20038 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
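
Both host-health probes are shell one-liners run over SSH: df -h /var | awk 'NR==2{print $5}' reports the percentage of /var in use, and df -BG /var | awk 'NR==2{print $4}' the gibibytes still free; here they fail before any shell runs because there is no port 22 to dial. A local stand-in for the pair (the real probes execute inside the Linux node, where df supports -BG; imports: fmt, os/exec, strings):

	pct, err1 := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
	gib, err2 := exec.Command("sh", "-c", "df -BG /var | awk 'NR==2{print $4}'").Output()
	if err1 == nil && err2 == nil {
		fmt.Printf("/var: %s used, %s free\n",
			strings.TrimSpace(string(pct)), strings.TrimSpace(string(gib)))
	}
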
	I0318 06:59:20.841301   20038 start.go:128] duration metric: took 6m3.164019952s to createHost
	I0318 06:59:20.841366   20038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 06:59:20.841429   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:20.890651   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:20.890755   20038 retry.go:31] will retry after 133.105589ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:21.024450   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:21.077391   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:21.077487   20038 retry.go:31] will retry after 225.558997ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:21.305465   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:21.358812   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:21.358909   20038 retry.go:31] will retry after 804.704061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:22.166042   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:22.218548   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 06:59:22.218653   20038 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 06:59:22.218679   20038 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:22.218738   20038 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 06:59:22.218795   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:22.267836   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:22.267929   20038 retry.go:31] will retry after 134.62642ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:22.403410   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:22.455572   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:22.455662   20038 retry.go:31] will retry after 520.961621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:22.979023   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:23.032829   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 06:59:23.032940   20038 retry.go:31] will retry after 825.628139ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:23.860236   20038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 06:59:23.911160   20038 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 06:59:23.911262   20038 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 06:59:23.911277   20038 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 06:59:23.911292   20038 fix.go:56] duration metric: took 6m23.365554808s for fixHost
	I0318 06:59:23.911310   20038 start.go:83] releasing machines lock for "multinode-242000", held for 6m23.365615412s
	W0318 06:59:23.911396   20038 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-242000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-242000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0318 06:59:23.953904   20038 out.go:177] 
	W0318 06:59:23.976133   20038 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0318 06:59:23.976191   20038 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0318 06:59:23.976231   20038 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0318 06:59:23.997720   20038 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-242000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
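
Exit status 52 is the DRV_CREATE_TIMEOUT path shown in the dump: createHost burned 6m3s and fixHost 6m23s against a 360-second budget. The same budget appears later in this report inside the profile dump as "StartHostTimeout":360000000000, which is simply that limit in nanoseconds:

	d := time.Duration(360000000000) // nanoseconds, as serialized in the profile JSON
	fmt.Println(d)                   // 6m0s -- the "timed out in 360.000000 seconds" above
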
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
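
Note what the post-mortem actually matched: with the node container never created, docker inspect multinode-242000 resolved to the bridge network of that name, and its "Containers": {} confirms nothing ever attached. A sketch of pulling the subnet out of that JSON, with out holding the inspect output above and the struct trimmed to the fields shown (imports: encoding/json, fmt):

	var nets []struct {
		Name string
		IPAM struct {
			Config []struct{ Subnet, Gateway string }
		}
	}
	if err := json.Unmarshal(out, &nets); err == nil && len(nets) > 0 {
		fmt.Println(nets[0].Name, nets[0].IPAM.Config[0].Subnet) // multinode-242000 192.168.94.0/24
	}
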
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (113.380307ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 06:59:24.259123   20427 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (752.76s)

TestMultiNode/serial/DeployApp2Nodes (116.72s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (101.349447ms)

** stderr ** 
	error: cluster "multinode-242000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- rollout status deployment/busybox: exit status 1 (100.285288ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.959318ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.094512ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.849462ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.457717ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.377581ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.431465ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.730432ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.383921ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.450642ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.867676ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.848234ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
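
The eleven identical get-pods calls above are a poll loop: the test re-runs the jsonpath query until both busybox pods report IPs or the budget lapses, and with no server behind the context every pass fails the same way. The shape of that loop, with the deadline and interval chosen here for illustration (imports: bytes, os/exec, time):

	deadline := time.Now().Add(2 * time.Minute) // illustrative budget
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "multinode-242000",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil && len(bytes.Fields(out)) >= 2 {
			break // both pod IPs assigned
		}
		time.Sleep(10 * time.Second)
	}
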
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.706603ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.io: exit status 1 (99.735262ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default: exit status 1 (100.004519ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (100.712604ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
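
The three nslookup probes walk from a public name to the fully qualified in-cluster service name: kubernetes.io, then kubernetes.default, then kubernetes.default.svc.cluster.local. The doubled space in "exec  --" is an empty pod name, since the earlier get-pods never returned one. The probe sequence in sketch form, with pod standing in for a busybox pod name the failing run never had (imports: fmt, os/exec):

	pod := "busybox-0" // hypothetical; no pod name was available above
	for _, name := range []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"} {
		out, _ := exec.Command("kubectl", "--context", "multinode-242000",
			"exec", pod, "--", "nslookup", name).CombinedOutput()
		fmt.Printf("%s:\n%s", name, out)
	}
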
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (114.86615ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:01:20.978503   20531 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (116.72s)

TestMultiNode/serial/PingHostFrom2Pods (0.27s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (99.893763ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (114.617172ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:01:21.246911   20540 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)

TestMultiNode/serial/AddNode (0.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-242000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-242000 -v 3 --alsologtostderr: exit status 80 (200.016421ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0318 07:01:21.310626   20544 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:21.310813   20544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:21.310819   20544 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:21.310823   20544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:21.311006   20544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:21.311341   20544 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:21.311617   20544 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:21.312002   20544 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:21.360467   20544 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:21.381963   20544 out.go:177] 
	W0318 07:01:21.403439   20544 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-242000 host status: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-242000 host status: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	I0318 07:01:21.424645   20544 out.go:177] 

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-242000 -v 3 --alsologtostderr" : exit status 80
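
Exit status 80 joins a family of codes visible across this report. The pairings below are read directly off the failures in this file, not from an authoritative minikube table:

	const (
		exitDrvCreateTimeout  = 52 // start: "Exiting due to DRV_CREATE_TIMEOUT"
		exitStatusNonexistent = 7  // status: host "Nonexistent" (may be ok)
		exitGuestStatus       = 80 // node add: "Exiting due to GUEST_STATUS"
		exitGuestNodeRetrieve = 85 // node stop: "Exiting due to GUEST_NODE_RETRIEVE"
	)
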
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (114.699989ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:01:21.615156   20550 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)

TestMultiNode/serial/MultiNodeLabels (0.21s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-242000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-242000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (38.568335ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-242000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-242000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-242000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
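
"unexpected end of JSON input" is what encoding/json returns when handed zero bytes: kubectl failed before printing anything, so the test decoded an empty document. Reproduced in isolation (imports: encoding/json, fmt):

	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // empty input, as here
	fmt.Println(err)                           // unexpected end of JSON input
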
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (115.094708ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:01:21.822519   20557 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.21s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-242000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-789000\",\"Status\":\"\",\"Config\":null,\"Active\":false},{\"Name\":\"functional-510000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-242000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-242000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-242000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
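
The assertion decodes that payload and counts Config.Nodes: the stale profile still records its single control-plane node, while the test, having asked for two nodes and attempted to add a third, expects three. A decode of the same shape, trimmed to the fields the check needs, with out holding the profile list output (imports: encoding/json, fmt):

	var profiles struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct{ Name string }
			}
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &profiles); err == nil && len(profiles.Valid) > 0 {
		fmt.Println(len(profiles.Valid[0].Config.Nodes)) // 1, not the expected 3
	}
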
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (115.776261ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:01:22.180524   20569 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (0.29s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status --output json --alsologtostderr: exit status 7 (116.595841ms)

-- stdout --
	{"Name":"multinode-242000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I0318 07:01:22.243731   20573 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:22.244003   20573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:22.244009   20573 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:22.244013   20573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:22.244192   20573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:22.244375   20573 out.go:298] Setting JSON to true
	I0318 07:01:22.244395   20573 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:22.244453   20573 notify.go:220] Checking for updates...
	I0318 07:01:22.244655   20573 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:22.244671   20573 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:22.246044   20573 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:22.297194   20573 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:22.297266   20573 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:22.297286   20573 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:22.297308   20573 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:22.297315   20573 status.go:263] The "multinode-242000" host does not exist!

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-242000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
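
That decode error is a shape mismatch, not bad JSON: with only one recorded node the status command emitted a single object (the stdout above), while the test unmarshals into a slice of cmd.Status. The same mismatch in miniature (imports: encoding/json, fmt):

	var sts []struct{ Name, Host string } // stand-in for []cmd.Status
	err := json.Unmarshal([]byte(`{"Name":"multinode-242000","Host":"Nonexistent"}`), &sts)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []struct { ... }
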
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (115.455283ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 07:01:22.466016   20579 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.29s)
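Two details in this post-mortem are easy to miss. Plain "docker inspect multinode-242000" succeeds because it matches the leftover Docker network of that name (hence the Scope/Driver/IPAM fields above), while "docker container inspect" keeps failing because the container itself is gone. The state probe that cli_runner.go repeats throughout this section can be reproduced in a few lines of Go; this is a sketch of the probe, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the same probe seen in the log and maps a missing
// container to "Nonexistent", matching the status output above.
func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown"
	}
	return strings.TrimSpace(string(out)) // e.g. "running" or "exited"
}

func main() {
	fmt.Println(containerState("multinode-242000"))
}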

                                                
                                    
TestMultiNode/serial/StopNode (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 node stop m03: exit status 85 (154.683529ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-242000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status: exit status 7 (115.462723ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 07:01:22.736827   20585 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:22.736839   20585 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr: exit status 7 (114.886228ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:22.801286   20589 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:22.801570   20589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:22.801575   20589 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:22.801579   20589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:22.801776   20589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:22.801962   20589 out.go:298] Setting JSON to false
	I0318 07:01:22.801983   20589 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:22.802026   20589 notify.go:220] Checking for updates...
	I0318 07:01:22.802287   20589 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:22.802302   20589 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:22.802744   20589 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:22.851500   20589 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:22.851559   20589 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:22.851585   20589 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:22.851607   20589 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:22.851847   20589 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (115.211938ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 07:01:23.020356   20595 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)
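Exit status 85 is the code minikube exited with for the GUEST_NODE_RETRIEVE failure above; the test harness sees nothing but that process exit code. In Go the code is read back through *exec.ExitError, as in this minimal sketch (binary path and profile name taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"-p", "multinode-242000", "node", "stop", "m03")
	err := cmd.Run()

	// A non-zero exit surfaces as *exec.ExitError; ExitCode() yields 85 here.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}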

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 node start m03 -v=7 --alsologtostderr: exit status 85 (156.734139ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:23.084438   20599 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:23.085767   20599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:23.085774   20599 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:23.085778   20599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:23.085959   20599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:23.086288   20599 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:23.086539   20599 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:23.107629   20599 out.go:177] 
	W0318 07:01:23.129419   20599 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0318 07:01:23.129443   20599 out.go:239] * 
	* 
	W0318 07:01:23.134160   20599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 07:01:23.155237   20599 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0318 07:01:23.084438   20599 out.go:291] Setting OutFile to fd 1 ...
I0318 07:01:23.085767   20599 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 07:01:23.085774   20599 out.go:304] Setting ErrFile to fd 2...
I0318 07:01:23.085778   20599 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 07:01:23.085959   20599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
I0318 07:01:23.086288   20599 mustload.go:65] Loading cluster: multinode-242000
I0318 07:01:23.086539   20599 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 07:01:23.107629   20599 out.go:177] 
W0318 07:01:23.129419   20599 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0318 07:01:23.129443   20599 out.go:239] * 
* 
W0318 07:01:23.134160   20599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 07:01:23.155237   20599 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-242000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (114.945942ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:23.240822   20601 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:23.241477   20601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:23.241487   20601 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:23.241494   20601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:23.242076   20601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:23.242282   20601 out.go:298] Setting JSON to false
	I0318 07:01:23.242308   20601 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:23.242352   20601 notify.go:220] Checking for updates...
	I0318 07:01:23.242563   20601 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:23.242578   20601 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:23.242942   20601 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:23.292339   20601 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:23.292405   20601 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:23.292434   20601 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:23.292459   20601 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:23.292466   20601 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (118.011734ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:24.093430   20605 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:24.093614   20605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:24.093619   20605 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:24.093623   20605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:24.093801   20605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:24.093993   20605 out.go:298] Setting JSON to false
	I0318 07:01:24.094015   20605 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:24.094052   20605 notify.go:220] Checking for updates...
	I0318 07:01:24.095248   20605 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:24.095273   20605 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:24.095637   20605 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:24.144844   20605 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:24.144942   20605 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:24.144965   20605 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:24.144987   20605 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:24.144994   20605 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (115.508452ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:25.831828   20611 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:25.832104   20611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:25.832110   20611 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:25.832114   20611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:25.832299   20611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:25.832479   20611 out.go:298] Setting JSON to false
	I0318 07:01:25.832502   20611 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:25.832540   20611 notify.go:220] Checking for updates...
	I0318 07:01:25.832769   20611 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:25.832786   20611 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:25.833170   20611 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:25.883342   20611 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:25.883446   20611 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:25.883465   20611 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:25.883489   20611 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:25.883497   20611 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (116.754339ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:28.576215   20617 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:28.577031   20617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:28.577040   20617 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:28.577046   20617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:28.577711   20617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:28.577913   20617 out.go:298] Setting JSON to false
	I0318 07:01:28.577938   20617 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:28.577994   20617 notify.go:220] Checking for updates...
	I0318 07:01:28.578201   20617 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:28.578217   20617 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:28.578590   20617 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:28.628005   20617 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:28.628075   20617 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:28.628129   20617 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:28.628155   20617 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:28.628163   20617 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (118.129074ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:31.944048   20621 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:31.944237   20621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:31.944242   20621 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:31.944246   20621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:31.944426   20621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:31.944605   20621 out.go:298] Setting JSON to false
	I0318 07:01:31.944625   20621 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:31.944665   20621 notify.go:220] Checking for updates...
	I0318 07:01:31.944901   20621 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:31.944917   20621 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:31.945330   20621 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:31.995584   20621 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:31.995665   20621 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:31.995694   20621 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:31.995721   20621 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:31.995728   20621 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (119.196417ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:35.807673   20625 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:35.808368   20625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:35.808376   20625 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:35.808382   20625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:35.809005   20625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:35.809194   20625 out.go:298] Setting JSON to false
	I0318 07:01:35.809217   20625 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:35.809260   20625 notify.go:220] Checking for updates...
	I0318 07:01:35.809466   20625 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:35.809483   20625 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:35.809858   20625 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:35.860029   20625 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:35.860093   20625 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:35.860116   20625 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:35.860139   20625 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:35.860148   20625 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (119.76194ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:01:47.169237   20629 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:01:47.170014   20629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:47.170035   20629 out.go:304] Setting ErrFile to fd 2...
	I0318 07:01:47.170145   20629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:01:47.170635   20629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:01:47.170886   20629 out.go:298] Setting JSON to false
	I0318 07:01:47.170911   20629 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:01:47.170949   20629 notify.go:220] Checking for updates...
	I0318 07:01:47.171171   20629 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:01:47.171186   20629 status.go:255] checking status of multinode-242000 ...
	I0318 07:01:47.171569   20629 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:01:47.221704   20629 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:01:47.221784   20629 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:01:47.221809   20629 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:01:47.221838   20629 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:01:47.221846   20629 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (120.917109ms)

                                                
                                                
-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:02:01.844866   20633 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:02:01.845082   20633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:02:01.845088   20633 out.go:304] Setting ErrFile to fd 2...
	I0318 07:02:01.845099   20633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:02:01.845291   20633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:02:01.845463   20633 out.go:298] Setting JSON to false
	I0318 07:02:01.845485   20633 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:02:01.845541   20633 notify.go:220] Checking for updates...
	I0318 07:02:01.845753   20633 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:02:01.845781   20633 status.go:255] checking status of multinode-242000 ...
	I0318 07:02:01.846217   20633 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:01.897318   20633 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:01.897381   20633 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:02:01.897406   20633 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:02:01.897427   20633 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:02:01.897435   20633 status.go:263] The "multinode-242000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-242000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "59b394342b1c608a62c404a20414c3529c000a49b3e489c87317a061bed16474",
	        "Created": "2024-03-18T13:53:18.04346755Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (114.155537ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 07:02:02.064464   20639 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (39.04s)
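The timestamps on the eight status runs above (07:01:23 through 07:02:01) are spaced from under a second up to roughly fifteen seconds apart, which suggests the test polls with a growing backoff before giving up. A sketch of that shape, assuming a simple doubling delay (an assumption; the real multinode_test.go schedule may differ):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForStatus polls the status command with a doubling delay between
// attempts, the general shape suggested by the timestamps above.
func waitForStatus(attempts int) error {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		err := exec.Command("out/minikube-darwin-amd64",
			"-p", "multinode-242000", "status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // back off before the next attempt
	}
	return fmt.Errorf("cluster never reported healthy after %d attempts", attempts)
}

func main() {
	if err := waitForStatus(8); err != nil {
		fmt.Println(err)
	}
}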

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (784.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-242000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-242000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-242000: exit status 82 (10.153611966s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-242000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-242000" : exit status 82
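Note that the args string quoted in this failure message shows the earlier "node list" invocation, while the command that actually failed was "stop"; the mismatch is in the test's own message, not in the log capture. The six "Stopping node" lines over about ten seconds, ending in exit status 82 (GUEST_STOP_TIMEOUT), show a retry-until-deadline loop that can never succeed because the container no longer exists. A sketch of that shape (timings and helper names are illustrative, not minikube's internals):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// stopWithDeadline retries a stop until the deadline passes, mirroring the
// repeated "Stopping node" lines above. Illustrative only.
func stopWithDeadline(name string, deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		fmt.Printf("* Stopping node %q ...\n", name)
		if err := exec.Command("docker", "stop", name).Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("GUEST_STOP_TIMEOUT: unable to stop %q", name)
}

func main() {
	if err := stopWithDeadline("multinode-242000", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}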
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr
E0318 07:02:46.080621   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:03:03.025754   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:03:47.240164   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:08:03.023303   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:08:30.285674   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:08:47.238100   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:13:03.020669   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:13:47.235611   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m54.067151823s)

                                                
                                                
-- stdout --
	* [multinode-242000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* docker "multinode-242000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-242000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 07:02:12.346975   20663 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:02:12.347234   20663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:02:12.347239   20663 out.go:304] Setting ErrFile to fd 2...
	I0318 07:02:12.347243   20663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:02:12.347422   20663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:02:12.348919   20663 out.go:298] Setting JSON to false
	I0318 07:02:12.371264   20663 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":12705,"bootTime":1710757827,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 07:02:12.371361   20663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 07:02:12.393419   20663 out.go:177] * [multinode-242000] minikube v1.32.0 on Darwin 14.3.1
	I0318 07:02:12.456981   20663 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 07:02:12.436031   20663 notify.go:220] Checking for updates...
	I0318 07:02:12.499125   20663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 07:02:12.521159   20663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 07:02:12.541866   20663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 07:02:12.563089   20663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 07:02:12.584909   20663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 07:02:12.606975   20663 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:02:12.607171   20663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 07:02:12.665680   20663 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 07:02:12.665848   20663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:02:12.814987   20663 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:false NGoroutines:140 SystemTime:2024-03-18 14:02:12.791120406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:02:12.857145   20663 out.go:177] * Using the docker driver based on existing profile
	I0318 07:02:12.878505   20663 start.go:297] selected driver: docker
	I0318 07:02:12.878525   20663 start.go:901] validating driver "docker" against &{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 07:02:12.878620   20663 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 07:02:12.878757   20663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:02:12.983755   20663 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:false NGoroutines:140 SystemTime:2024-03-18 14:02:12.972870483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:02:12.986847   20663 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 07:02:12.986914   20663 cni.go:84] Creating CNI manager for ""
	I0318 07:02:12.986924   20663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 07:02:12.986988   20663 start.go:340] cluster config:
	{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
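The cni.go lines above record minikube's CNI choice for this profile: the cluster config carries MultiNodeRequested:true, so the default bridge CNI is not sufficient and kindnet is recommended even though only one node is found so far. A minimal Go sketch of that decision follows; clusterConfig and chooseCNI are illustrative names, not minikube's actual types.

	package main

	import "fmt"

	// chooseCNI mirrors the decision logged by cni.go above: a multinode
	// cluster (requested or actual) gets kindnet; otherwise the choice is
	// left empty, meaning "use the container runtime's default".
	type clusterConfig struct {
		MultiNodeRequested bool
		Nodes              int
	}

	func chooseCNI(cc clusterConfig) string {
		if cc.MultiNodeRequested || cc.Nodes > 1 {
			return "kindnet"
		}
		return ""
	}

	func main() {
		// Matches the log: 1 node found, but multinode was requested.
		fmt.Println(chooseCNI(clusterConfig{MultiNodeRequested: true, Nodes: 1})) // kindnet
	}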
	I0318 07:02:13.030341   20663 out.go:177] * Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	I0318 07:02:13.051592   20663 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 07:02:13.073699   20663 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0318 07:02:13.115551   20663 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:02:13.115617   20663 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 07:02:13.115607   20663 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 07:02:13.115630   20663 cache.go:56] Caching tarball of preloaded images
	I0318 07:02:13.115828   20663 preload.go:173] Found /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 07:02:13.115841   20663 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 07:02:13.116421   20663 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/multinode-242000/config.json ...
	I0318 07:02:13.165611   20663 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 07:02:13.165628   20663 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0318 07:02:13.165656   20663 cache.go:194] Successfully downloaded all kic artifacts
	I0318 07:02:13.165694   20663 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkba9fe2419e9cf6c0347d7f2eb6e7a616348974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:02:13.165776   20663 start.go:364] duration metric: took 66.868µs to acquireMachinesLock for "multinode-242000"
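The acquireMachinesLock line above carries its retry contract inline: {Delay:500ms Timeout:10m0s}, i.e. poll every 500 ms for up to 10 minutes before giving up. A self-contained sketch of that polling shape, using an O_EXCL lock file rather than minikube's real lock package (acquireLock is a hypothetical helper):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file until it wins or the
	// timeout expires, retrying every delay -- the {Delay:500ms Timeout:10m0s}
	// contract in the start.go line above.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/multinode-242000.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held")
	}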
	I0318 07:02:13.165797   20663 start.go:96] Skipping create...Using existing machine configuration
	I0318 07:02:13.165806   20663 fix.go:54] fixHost starting: 
	I0318 07:02:13.166032   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:13.215565   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:13.215620   20663 fix.go:112] recreateIfNeeded on multinode-242000: state= err=unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:13.215651   20663 fix.go:117] machineExists: false. err=machine does not exist
	I0318 07:02:13.237502   20663 out.go:177] * docker "multinode-242000" container is missing, will recreate.
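The fix.go/cli_runner.go exchange above is minikube probing whether the machine still exists: docker container inspect --format={{.State.Status}} exits 1 with "No such container" on stderr, which is mapped to machineExists: false and triggers the recreate path that follows. A minimal sketch of that probe (containerState is a hypothetical helper, not minikube's API):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState runs the same inspect as cli_runner.go above and maps
	// docker's "No such container" failure to an empty state, which is what
	// drives the "machine does not exist ... will recreate" branch.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "", nil // container missing: recreate it
			}
			return "", fmt.Errorf("unknown state %q: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("multinode-242000")
		fmt.Printf("state=%q err=%v\n", state, err)
	}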
	I0318 07:02:13.280160   20663 delete.go:124] DEMOLISHING multinode-242000 ...
	I0318 07:02:13.280351   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:13.330831   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 07:02:13.330880   20663 stop.go:83] unable to get state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:13.330900   20663 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:13.331279   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:13.379908   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:13.379957   20663 delete.go:82] Unable to get host status for multinode-242000, assuming it has already been deleted: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:13.380038   20663 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 07:02:13.429717   20663 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 07:02:13.429758   20663 kic.go:371] could not find the container multinode-242000 to remove it. will try anyways
	I0318 07:02:13.429835   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:13.478858   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 07:02:13.478905   20663 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:13.478979   20663 cli_runner.go:164] Run: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0"
	W0318 07:02:13.527834   20663 cli_runner.go:211] docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 07:02:13.527867   20663 oci.go:650] error shutdown multinode-242000: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:14.528890   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:14.581913   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:14.581955   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:14.581969   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:14.582006   20663 retry.go:31] will retry after 397.242341ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:14.980012   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:15.030661   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:15.030706   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:15.030714   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:15.030741   20663 retry.go:31] will retry after 709.611367ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:15.742680   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:15.795153   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:15.795196   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:15.795205   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:15.795230   20663 retry.go:31] will retry after 1.329195552s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:17.125269   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:17.178071   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:17.178112   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:17.178131   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:17.178158   20663 retry.go:31] will retry after 1.948594064s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:19.127389   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:19.177203   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:19.177257   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:19.177270   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:19.177292   20663 retry.go:31] will retry after 1.944742524s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:21.123248   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:21.176708   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:21.176751   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:21.176760   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:21.176783   20663 retry.go:31] will retry after 3.995897321s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:25.172866   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:25.223742   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:25.223786   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:25.223794   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:25.223821   20663 retry.go:31] will retry after 5.73135752s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:30.956698   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:02:31.008903   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:02:31.008944   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:02:31.008952   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:02:31.008985   20663 oci.go:88] couldn't shut down multinode-242000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	 
	I0318 07:02:31.009061   20663 cli_runner.go:164] Run: docker rm -f -v multinode-242000
	I0318 07:02:31.058424   20663 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 07:02:31.107535   20663 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 07:02:31.107642   20663 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:02:31.158333   20663 cli_runner.go:164] Run: docker network rm multinode-242000
	I0318 07:02:31.262899   20663 fix.go:124] Sleeping 1 second for extra luck!
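The retry.go delays in the demolish sequence above (0.397 s, 0.709 s, 1.33 s, 1.95 s, 1.94 s, 4.0 s, 5.73 s) follow roughly exponential backoff with jitter before oci.go gives up on verifying shutdown and delete.go force-removes the container and network anyway. A generic Go sketch of that pattern, not minikube's actual retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn with exponentially growing, jittered
	// sleeps, the same shape as the retry.go delays in the log above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := base << i                                    // exponential growth
			sleep += time.Duration(rand.Int63n(int64(sleep) / 2)) // up to 50% jitter
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(5, 400*time.Millisecond, func() error {
			return errors.New("couldn't verify container is exited")
		})
		fmt.Println("giving up (might be okay):", err)
	}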
	I0318 07:02:32.264702   20663 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:02:32.288098   20663 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0318 07:02:32.288259   20663 start.go:159] libmachine.API.Create for "multinode-242000" (driver="docker")
	I0318 07:02:32.288312   20663 client.go:168] LocalClient.Create starting
	I0318 07:02:32.288528   20663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:02:32.288615   20663 main.go:141] libmachine: Decoding PEM data...
	I0318 07:02:32.288652   20663 main.go:141] libmachine: Parsing certificate...
	I0318 07:02:32.288750   20663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:02:32.288819   20663 main.go:141] libmachine: Decoding PEM data...
	I0318 07:02:32.288834   20663 main.go:141] libmachine: Parsing certificate...
	I0318 07:02:32.289727   20663 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:02:32.341345   20663 cli_runner.go:211] docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:02:32.341436   20663 network_create.go:281] running [docker network inspect multinode-242000] to gather additional debugging logs...
	I0318 07:02:32.341454   20663 cli_runner.go:164] Run: docker network inspect multinode-242000
	W0318 07:02:32.390230   20663 cli_runner.go:211] docker network inspect multinode-242000 returned with exit code 1
	I0318 07:02:32.390255   20663 network_create.go:284] error running [docker network inspect multinode-242000]: docker network inspect multinode-242000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-242000 not found
	I0318 07:02:32.390267   20663 network_create.go:286] output of [docker network inspect multinode-242000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-242000 not found
	
	** /stderr **
	I0318 07:02:32.390395   20663 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:02:32.441685   20663 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:02:32.443149   20663 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:02:32.444488   20663 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:02:32.444851   20663 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00238b800}
	I0318 07:02:32.444869   20663 network_create.go:124] attempt to create docker network multinode-242000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0318 07:02:32.444934   20663 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000
	I0318 07:02:32.531156   20663 network_create.go:108] docker network multinode-242000 192.168.76.0/24 created
	I0318 07:02:32.531204   20663 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-242000" container
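network.go above walks candidate private /24 subnets, skipping any already reserved by existing Docker networks; judging from the sequence 49 -> 58 -> 67 -> 76, candidates step by 9 in the third octet, and the gateway and static node IP are then the .1 and .2 of the chosen subnet. A sketch of that scan against a caller-supplied reserved set (checking live Docker networks is omitted, and the step-of-9 walk is inferred from the log, not confirmed from source):

	package main

	import "fmt"

	// freeSubnet walks 192.168.<x>.0/24 candidates in steps of 9 in the
	// third octet (49, 58, 67, ...) and returns the first one not in the
	// reserved set -- the shape of the network.go scan above.
	func freeSubnet(reserved map[string]bool) (string, bool) {
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !reserved[cidr] {
				return cidr, true
			}
		}
		return "", false
	}

	func main() {
		reserved := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println(freeSubnet(reserved)) // 192.168.76.0/24 true
	}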
	I0318 07:02:32.531311   20663 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:02:32.581984   20663 cli_runner.go:164] Run: docker volume create multinode-242000 --label name.minikube.sigs.k8s.io=multinode-242000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:02:32.630712   20663 oci.go:103] Successfully created a docker volume multinode-242000
	I0318 07:02:32.630827   20663 cli_runner.go:164] Run: docker run --rm --name multinode-242000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242000 --entrypoint /usr/bin/test -v multinode-242000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:02:32.931125   20663 oci.go:107] Successfully prepared a docker volume multinode-242000
	I0318 07:02:32.931159   20663 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:02:32.931171   20663 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:02:32.931264   20663 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
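kic.go extracts the preloaded image tarball into the freshly created volume by running tar inside a throwaway kicbase container, which is exactly the docker run above. Note the timestamp jump that follows: this command is issued at 07:02:32 and the next log line is at 07:08:32, the full 360 s createHost budget, so the start appears to stall in this extraction step. A sketch of the same invocation from Go (extractPreload is a hypothetical wrapper; substitute real paths for the placeholders):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload re-creates the kic.go step above: run a throwaway
	// container from the kicbase image with the preload tarball and the
	// named volume mounted, and untar the cached images into the volume.
	func extractPreload(volume, tarball, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract preload: %w\n%s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(extractPreload("multinode-242000",
			"/path/to/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375"))
	}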
	I0318 07:08:32.286119   20663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:08:32.286250   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:32.339603   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:32.339728   20663 retry.go:31] will retry after 318.23804ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:32.660423   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:32.713621   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:32.713717   20663 retry.go:31] will retry after 337.655064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:33.051664   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:33.104181   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:33.104290   20663 retry.go:31] will retry after 565.784749ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:33.671065   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:33.723136   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:08:33.723247   20663 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:08:33.723275   20663 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
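Each of the retries above is minikube resolving the host port mapped to the guest's 22/tcp so it can SSH in and run df; since the container was never created, every inspect fails with "No such container". The lookup itself is just a Go template over NetworkSettings.Ports, as in this sketch (sshHostPort is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks Docker which host port is published for the
	// container's 22/tcp, using the same Go template as the cli_runner.go
	// lines above. While the container does not exist, this fails exactly
	// as in the log.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		fmt.Println(sshHostPort("multinode-242000"))
	}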
	I0318 07:08:33.723332   20663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:08:33.723392   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:33.773224   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:33.773328   20663 retry.go:31] will retry after 238.36457ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:34.013053   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:34.065874   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:34.065972   20663 retry.go:31] will retry after 458.911205ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:34.525196   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:34.578464   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:34.578563   20663 retry.go:31] will retry after 611.520804ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:35.190732   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:35.241493   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:08:35.241596   20663 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:08:35.241612   20663 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:35.241624   20663 start.go:128] duration metric: took 6m2.979804186s to createHost
	I0318 07:08:35.241708   20663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:08:35.241761   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:35.291283   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:35.291376   20663 retry.go:31] will retry after 311.88166ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:35.605641   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:35.658499   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:35.658590   20663 retry.go:31] will retry after 317.738977ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:35.978693   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:36.033382   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:36.033476   20663 retry.go:31] will retry after 813.090373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:36.848965   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:36.901718   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:08:36.901818   20663 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:08:36.901836   20663 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:36.901892   20663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:08:36.901953   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:36.951160   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:36.951245   20663 retry.go:31] will retry after 244.625693ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:37.196625   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:37.249905   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:37.250007   20663 retry.go:31] will retry after 513.060112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:37.764296   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:37.815820   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:08:37.815918   20663 retry.go:31] will retry after 614.317576ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:38.431143   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:08:38.484532   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:08:38.484641   20663 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:08:38.484658   20663 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:38.484666   20663 fix.go:56] duration metric: took 6m25.321949291s for fixHost
	I0318 07:08:38.484672   20663 start.go:83] releasing machines lock for "multinode-242000", held for 6m25.321976786s
	W0318 07:08:38.484688   20663 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0318 07:08:38.484747   20663 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0318 07:08:38.484753   20663 start.go:728] Will try again in 5 seconds ...
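start.go bounds each createHost attempt at 360 s and, on failure, waits 5 s and tries once more, which is what the surrounding lines record (the 6m2.98s createHost, the "create host timed out in 360.000000 seconds" StartHost warning, and "Will try again in 5 seconds"). A compact sketch of that control flow using context deadlines (createWithTimeout and the inline createHost stub are illustrative):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// createWithTimeout bounds each host-creation attempt at the 360 s
	// budget seen above, then waits 5 s and retries once, matching the
	// start.go behaviour recorded in this log.
	func createWithTimeout(createHost func(context.Context) error) error {
		var err error
		for attempt := 0; attempt < 2; attempt++ {
			ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
			err = createHost(ctx)
			cancel()
			if err == nil {
				return nil
			}
			if attempt == 0 {
				fmt.Println("! StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second)
			}
		}
		return err
	}

	func main() {
		err := createWithTimeout(func(ctx context.Context) error {
			<-ctx.Done() // simulate a provisioning hang until the deadline fires
			return errors.New("create host timed out")
		})
		fmt.Println("final:", err)
	}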
	I0318 07:08:43.486961   20663 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkba9fe2419e9cf6c0347d7f2eb6e7a616348974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:08:43.487155   20663 start.go:364] duration metric: took 153.165µs to acquireMachinesLock for "multinode-242000"
	I0318 07:08:43.487195   20663 start.go:96] Skipping create...Using existing machine configuration
	I0318 07:08:43.487203   20663 fix.go:54] fixHost starting: 
	I0318 07:08:43.487654   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:43.540726   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:43.540769   20663 fix.go:112] recreateIfNeeded on multinode-242000: state= err=unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:43.540783   20663 fix.go:117] machineExists: false. err=machine does not exist
	I0318 07:08:43.562557   20663 out.go:177] * docker "multinode-242000" container is missing, will recreate.
	I0318 07:08:43.584315   20663 delete.go:124] DEMOLISHING multinode-242000 ...
	I0318 07:08:43.584544   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:43.635800   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 07:08:43.635846   20663 stop.go:83] unable to get state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:43.635864   20663 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:43.636237   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:43.685667   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:43.685714   20663 delete.go:82] Unable to get host status for multinode-242000, assuming it has already been deleted: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:43.685785   20663 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 07:08:43.735537   20663 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 07:08:43.735571   20663 kic.go:371] could not find the container multinode-242000 to remove it. will try anyways
	I0318 07:08:43.735641   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:43.785318   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 07:08:43.785364   20663 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:43.785445   20663 cli_runner.go:164] Run: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0"
	W0318 07:08:43.835373   20663 cli_runner.go:211] docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 07:08:43.835410   20663 oci.go:650] error shutdown multinode-242000: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:44.837016   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:44.888386   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:44.888431   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:44.888440   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:44.888462   20663 retry.go:31] will retry after 478.529704ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:45.368582   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:45.420551   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:45.420598   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:45.420615   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:45.420647   20663 retry.go:31] will retry after 856.573091ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:46.277965   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:46.329674   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:46.329730   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:46.329738   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:46.329758   20663 retry.go:31] will retry after 1.687009288s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:48.017436   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:48.070677   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:48.070724   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:48.070733   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:48.070758   20663 retry.go:31] will retry after 1.579325041s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:49.650403   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:49.703460   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:49.703503   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:49.703516   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:49.703541   20663 retry.go:31] will retry after 2.145985594s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:51.850633   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:51.903616   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:51.903660   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:51.903669   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:51.903695   20663 retry.go:31] will retry after 2.765818538s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:54.669755   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:54.722396   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:54.722450   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:54.722464   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:54.722488   20663 retry.go:31] will retry after 3.314239651s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:58.038041   20663 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:08:58.090756   20663 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:08:58.090804   20663 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:08:58.090812   20663 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:08:58.090838   20663 oci.go:88] couldn't shut down multinode-242000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	 
	I0318 07:08:58.090924   20663 cli_runner.go:164] Run: docker rm -f -v multinode-242000
	I0318 07:08:58.141351   20663 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 07:08:58.190901   20663 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 07:08:58.191007   20663 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:08:58.241724   20663 cli_runner.go:164] Run: docker network rm multinode-242000
	I0318 07:08:58.350839   20663 fix.go:124] Sleeping 1 second for extra luck!
	I0318 07:08:59.352106   20663 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:08:59.396040   20663 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0318 07:08:59.396221   20663 start.go:159] libmachine.API.Create for "multinode-242000" (driver="docker")
	I0318 07:08:59.396250   20663 client.go:168] LocalClient.Create starting
	I0318 07:08:59.396495   20663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:08:59.396589   20663 main.go:141] libmachine: Decoding PEM data...
	I0318 07:08:59.396614   20663 main.go:141] libmachine: Parsing certificate...
	I0318 07:08:59.396690   20663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:08:59.396770   20663 main.go:141] libmachine: Decoding PEM data...
	I0318 07:08:59.396786   20663 main.go:141] libmachine: Parsing certificate...
	I0318 07:08:59.397547   20663 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:08:59.450887   20663 cli_runner.go:211] docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:08:59.450980   20663 network_create.go:281] running [docker network inspect multinode-242000] to gather additional debugging logs...
	I0318 07:08:59.450999   20663 cli_runner.go:164] Run: docker network inspect multinode-242000
	W0318 07:08:59.501781   20663 cli_runner.go:211] docker network inspect multinode-242000 returned with exit code 1
	I0318 07:08:59.501812   20663 network_create.go:284] error running [docker network inspect multinode-242000]: docker network inspect multinode-242000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-242000 not found
	I0318 07:08:59.501824   20663 network_create.go:286] output of [docker network inspect multinode-242000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-242000 not found
	
	** /stderr **
	I0318 07:08:59.501950   20663 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:08:59.554280   20663 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:08:59.555639   20663 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:08:59.557263   20663 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:08:59.558838   20663 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:08:59.559180   20663 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002519dc0}
	I0318 07:08:59.559191   20663 network_create.go:124] attempt to create docker network multinode-242000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0318 07:08:59.559274   20663 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000
	W0318 07:08:59.608761   20663 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000 returned with exit code 1
	W0318 07:08:59.608798   20663 network_create.go:149] failed to create docker network multinode-242000 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0318 07:08:59.608816   20663 network_create.go:116] failed to create docker network multinode-242000 192.168.85.0/24, will retry: subnet is taken
	I0318 07:08:59.610178   20663 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:08:59.610560   20663 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024ae210}
	I0318 07:08:59.610572   20663 network_create.go:124] attempt to create docker network multinode-242000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0318 07:08:59.610640   20663 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000
	I0318 07:08:59.696923   20663 network_create.go:108] docker network multinode-242000 192.168.94.0/24 created
	I0318 07:08:59.696958   20663 kic.go:121] calculated static IP "192.168.94.2" for the "multinode-242000" container
	I0318 07:08:59.697061   20663 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:08:59.747031   20663 cli_runner.go:164] Run: docker volume create multinode-242000 --label name.minikube.sigs.k8s.io=multinode-242000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:08:59.796508   20663 oci.go:103] Successfully created a docker volume multinode-242000
	I0318 07:08:59.796629   20663 cli_runner.go:164] Run: docker run --rm --name multinode-242000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242000 --entrypoint /usr/bin/test -v multinode-242000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:09:00.106696   20663 oci.go:107] Successfully prepared a docker volume multinode-242000
	I0318 07:09:00.106731   20663 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:09:00.106744   20663 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:09:00.106834   20663 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0318 07:14:59.395768   20663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:14:59.395895   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:14:59.450868   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:14:59.450982   20663 retry.go:31] will retry after 160.261537ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:14:59.611582   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:14:59.664246   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:14:59.664346   20663 retry.go:31] will retry after 240.067871ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:14:59.905744   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:14:59.960041   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:14:59.960168   20663 retry.go:31] will retry after 695.585941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:00.658158   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:00.709765   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:15:00.709871   20663 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:15:00.709890   20663 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:00.709954   20663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:15:00.710006   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:00.760173   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:00.760267   20663 retry.go:31] will retry after 311.559011ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:01.072664   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:01.127189   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:01.127295   20663 retry.go:31] will retry after 210.257055ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:01.338120   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:01.391283   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:01.391384   20663 retry.go:31] will retry after 296.584773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:01.688293   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:01.741484   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:01.741592   20663 retry.go:31] will retry after 666.829745ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:02.410766   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:02.462705   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:15:02.462831   20663 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:15:02.462851   20663 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:02.462863   20663 start.go:128] duration metric: took 6m3.113626345s to createHost
	I0318 07:15:02.462928   20663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 07:15:02.462982   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:02.512668   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:02.512764   20663 retry.go:31] will retry after 360.502899ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:02.875547   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:02.929097   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:02.929193   20663 retry.go:31] will retry after 330.20481ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:03.260887   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:03.313560   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:03.313653   20663 retry.go:31] will retry after 310.175252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:03.624931   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:03.678371   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:03.678471   20663 retry.go:31] will retry after 764.744132ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:04.444817   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:04.497076   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:15:04.497175   20663 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:15:04.497188   20663 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:04.497254   20663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0318 07:15:04.497318   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:04.547155   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:04.547250   20663 retry.go:31] will retry after 344.040804ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:04.891859   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:04.945981   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:04.946069   20663 retry.go:31] will retry after 416.717298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:05.364174   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:05.416445   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	I0318 07:15:05.416541   20663 retry.go:31] will retry after 702.639356ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:06.121532   20663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000
	W0318 07:15:06.174419   20663 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000 returned with exit code 1
	W0318 07:15:06.174521   20663 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	W0318 07:15:06.174553   20663 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-242000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-242000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:06.174563   20663 fix.go:56] duration metric: took 6m22.690428217s for fixHost
	I0318 07:15:06.174569   20663 start.go:83] releasing machines lock for "multinode-242000", held for 6m22.690469143s
	W0318 07:15:06.174652   20663 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-242000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-242000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0318 07:15:06.217934   20663 out.go:177] 
	W0318 07:15:06.239161   20663 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0318 07:15:06.239211   20663 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0318 07:15:06.239236   20663 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0318 07:15:06.260447   20663 out.go:177] 

** /stderr **
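
The dominant failure in the log above is the repeated port-22 lookup: every `get ssh host-port` retry runs the same Go template through `docker container inspect` and gets "No such container" back. A minimal, self-contained sketch of that lookup (the function name and wiring are illustrative, not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks the Docker daemon which host port is mapped to the
// container's 22/tcp, using the same Go template the log shows. When the
// container does not exist, docker exits non-zero and stderr carries
// "No such container" -- exactly the failure repeated above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("multinode-242000")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("ssh is published on host port", port)
}
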
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-242000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-242000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "035b63fb35f76d51a1e0d7e3e0e0a7ac8e026d9b47173bdc0feefba8f4be9b2d",
	        "Created": "2024-03-18T14:08:59.657423009Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
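
Note that the post-mortem `docker inspect multinode-242000` above matched the leftover bridge network, not a container: the `"Driver": "bridge"` and empty `"Containers": {}` fields show that only the network survived the failed recreate. A small sketch that disambiguates the two cases with docker's `--type` flag (the helper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// classify reports whether name currently resolves to a container, a
// network, or nothing. `docker inspect --type ...` is a real docker CLI
// flag; the function around it is just an illustration of why the
// post-mortem above printed a network object.
func classify(name string) string {
	if exec.Command("docker", "inspect", "--type", "container", name).Run() == nil {
		return "container"
	}
	if exec.Command("docker", "inspect", "--type", "network", name).Run() == nil {
		return "network"
	}
	return "absent"
}

func main() {
	fmt.Println("multinode-242000 is a", classify("multinode-242000"))
}
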
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (115.610438ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:15:06.596794   20997 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (784.54s)
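
The `retry.go:31] will retry after ...` lines in the log above show the verify-shutdown loop sleeping for roughly exponentially growing, jittered intervals (478ms, 856ms, 1.69s, ... 3.31s) before giving up. A generic sketch of that backoff shape, assuming nothing about minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries op with exponentially growing, jittered sleeps -- the
// shape visible in the "will retry after ..." lines above. This is a
// generic sketch, not minikube's pkg/util/retry implementation.
func retryExpo(op func() error, initial, budget time.Duration) error {
	var err error
	deadline := time.Now().Add(budget)
	for wait := initial; time.Now().Before(deadline); wait *= 2 {
		if err = op(); err == nil {
			return nil
		}
		// +-50% jitter so concurrent callers do not retry in lockstep.
		jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
	}
	return err
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("couldn't verify container is exited")
		}
		return nil
	}, 500*time.Millisecond, 15*time.Second)
	fmt.Println("attempts:", attempts, "err:", err)
}
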

TestMultiNode/serial/DeleteNode (0.49s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 node delete m03: exit status 80 (202.405829ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-242000 host status: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-242000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr: exit status 7 (115.734862ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0318 07:15:06.863867   21005 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:15:06.864134   21005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:15:06.864140   21005 out.go:304] Setting ErrFile to fd 2...
	I0318 07:15:06.864143   21005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:15:06.864331   21005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:15:06.864511   21005 out.go:298] Setting JSON to false
	I0318 07:15:06.864531   21005 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:15:06.864576   21005 notify.go:220] Checking for updates...
	I0318 07:15:06.864828   21005 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:15:06.864844   21005 status.go:255] checking status of multinode-242000 ...
	I0318 07:15:06.865268   21005 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:06.915187   21005 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:06.915242   21005 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:15:06.915266   21005 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:15:06.915287   21005 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:15:06.915295   21005 status.go:263] The "multinode-242000" host does not exist!

** /stderr **
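
The `status.go:257` line above dumps the status struct minikube prints; when `docker container inspect --format={{.State.Status}}` fails, every component is reported as Nonexistent. A stripped-down sketch of that mapping (the field set is copied from the log; the decision logic is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Status mirrors the fields printed by status.go:257 in the log above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

// hostState mimics the observable behaviour: when the inspect template
// fails (e.g. "No such container"), every component is reported as
// "Nonexistent". The exact decision logic here is an assumption.
func hostState(container string) Status {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format={{.State.Status}}").Output()
	state := strings.TrimSpace(string(out))
	if err != nil || state == "" {
		return Status{Name: container, Host: "Nonexistent", Kubelet: "Nonexistent",
			APIServer: "Nonexistent", Kubeconfig: "Nonexistent"}
	}
	return Status{Name: container, Host: state}
}

func main() {
	fmt.Printf("%+v\n", hostState("multinode-242000"))
}
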
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "035b63fb35f76d51a1e0d7e3e0e0a7ac8e026d9b47173bdc0feefba8f4be9b2d",
	        "Created": "2024-03-18T14:08:59.657423009Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
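
The surviving network's IPAM block (Subnet 192.168.94.0/24) is the second candidate the earlier log actually tried to create: minikube skipped the reserved /24s at 192.168.49/58/67/76, had its attempt at 192.168.85.0/24 rejected by the daemon with "Pool overlaps with other one on this address space", and succeeded on 192.168.94.0/24 -- the third octet stepping by 9. A sketch of that probing loop, with the step size inferred from the log rather than from minikube's source:

package main

import (
	"fmt"
	"os/exec"
)

// createFreeNetwork walks the same /24 candidates the log shows
// (192.168.49.0, 58, 67, ... third octet + 9) and moves on when the
// daemon rejects a range, e.g. with "Pool overlaps with other one on
// this address space".
func createFreeNetwork(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name)
		if err := cmd.Run(); err == nil {
			return subnet, nil // created on a free range
		}
		// overlap (or any other failure): try the next candidate
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet for %s", name)
}

func main() {
	subnet, err := createFreeNetwork("multinode-242000-demo")
	fmt.Println(subnet, err)
}
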
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (116.419635ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:15:07.084424   21011 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.49s)
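
Worth noting from the timestamps in the RestartKeepsNodes log: the `kic.go:194` preload extraction started at 07:09:00 and the next log line appears at 07:14:59, so unpacking the lz4 preload into the volume appears to have consumed almost the entire 360-second createHost budget before the timeout fired. A sketch of that extraction step with an explicit context deadline (paths and image tag are taken from the log, minus the pinned digest; the timeout wrapper is a suggestion, not minikube's code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// extractPreload replays the kic.go:194 step from the log: untar the
// lz4-compressed preload into a named volume via a throwaway container,
// bounded by a context so a slow extraction fails fast instead of
// silently eating the createHost budget.
func extractPreload(ctx context.Context, tarball, volume, image string) error {
	cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	err := extractPreload(ctx,
		"/Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
		"multinode-242000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375")
	fmt.Println("extract:", err)
}
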

TestMultiNode/serial/StopMultiNode (16.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 stop: exit status 82 (15.725377221s)

-- stdout --
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	* Stopping node "multinode-242000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-242000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
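
For reference, the teardown sequence visible earlier (oci.go:650's `sudo init 0`, the `couldn't shut down ... (might be okay)` concession at oci.go:88, then `docker rm -f -v` and `docker network rm`) amounts to: attempt a polite guest shutdown, tolerate its failure, then force-remove everything. A compressed sketch of that flow (function name illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// deleteMachine mirrors the sequence the log shows: ask the guest to
// power off politely, tolerate failure (the container may already be
// gone), then force-remove the container, its volumes, and the paired
// network.
func deleteMachine(name string) {
	// Best-effort clean shutdown; "No such container" is acceptable here.
	if err := exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run(); err != nil {
		fmt.Printf("couldn't shut down %s (might be okay): %v\n", name, err)
	}
	// Force-remove the container (with volumes) and the paired network.
	_ = exec.Command("docker", "rm", "-f", "-v", name).Run()
	_ = exec.Command("docker", "network", "rm", name).Run()
}

func main() {
	deleteMachine("multinode-242000")
}
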
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-242000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status: exit status 7 (116.238866ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0318 07:15:22.926119   21036 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:15:22.926132   21036 status.go:263] The "multinode-242000" host does not exist!

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr: exit status 7 (115.597464ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0318 07:15:22.990064   21040 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:15:22.990332   21040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:15:22.990338   21040 out.go:304] Setting ErrFile to fd 2...
	I0318 07:15:22.990341   21040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:15:22.990518   21040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:15:22.990691   21040 out.go:298] Setting JSON to false
	I0318 07:15:22.990712   21040 mustload.go:65] Loading cluster: multinode-242000
	I0318 07:15:22.990755   21040 notify.go:220] Checking for updates...
	I0318 07:15:22.990979   21040 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:15:22.990994   21040 status.go:255] checking status of multinode-242000 ...
	I0318 07:15:22.991431   21040 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:23.041771   21040 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:23.041846   21040 status.go:330] multinode-242000 host status = "" (err=state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	)
	I0318 07:15:23.041865   21040 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 07:15:23.041889   21040 status.go:260] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	E0318 07:15:23.041897   21040 status.go:263] The "multinode-242000" host does not exist!

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "035b63fb35f76d51a1e0d7e3e0e0a7ac8e026d9b47173bdc0feefba8f4be9b2d",
	        "Created": "2024-03-18T14:08:59.657423009Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (115.475812ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0318 07:15:23.210452   21046 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (16.13s)
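
The RestartMultiNode log below probes the daemon with `docker system info --format "{{json .}}"` before recreating the container. A trimmed sketch of decoding that JSON into a few of the fields the log prints (the struct here is a hand-picked subset, not minikube's full info type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo decodes a subset of the JSON emitted by
// `docker system info --format "{{json .}}"` -- the same fields visible
// in the info.go:266 dump below.
type dockerInfo struct {
	ServerVersion     string `json:"ServerVersion"`
	OperatingSystem   string `json:"OperatingSystem"`
	NCPU              int    `json:"NCPU"`
	MemTotal          int64  `json:"MemTotal"`
	ContainersRunning int    `json:"ContainersRunning"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%+v\n", info)
}
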

TestMultiNode/serial/RestartMultiNode (88.46s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m28.288534092s)

-- stdout --
	* [multinode-242000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* docker "multinode-242000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0318 07:15:23.273268   21050 out.go:291] Setting OutFile to fd 1 ...
	I0318 07:15:23.273937   21050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:15:23.273946   21050 out.go:304] Setting ErrFile to fd 2...
	I0318 07:15:23.273952   21050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 07:15:23.274499   21050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 07:15:23.276107   21050 out.go:298] Setting JSON to false
	I0318 07:15:23.298705   21050 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":13496,"bootTime":1710757827,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 07:15:23.298797   21050 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 07:15:23.320395   21050 out.go:177] * [multinode-242000] minikube v1.32.0 on Darwin 14.3.1
	I0318 07:15:23.383186   21050 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 07:15:23.362271   21050 notify.go:220] Checking for updates...
	I0318 07:15:23.425079   21050 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 07:15:23.446220   21050 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 07:15:23.467646   21050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 07:15:23.489151   21050 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 07:15:23.510146   21050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 07:15:23.531939   21050 config.go:182] Loaded profile config "multinode-242000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 07:15:23.532780   21050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 07:15:23.589015   21050 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 07:15:23.589177   21050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:15:23.690582   21050 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:6 ContainersRunning:2 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:160 SystemTime:2024-03-18 14:15:23.68032614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:15:23.732958   21050 out.go:177] * Using the docker driver based on existing profile
	I0318 07:15:23.753745   21050 start.go:297] selected driver: docker
	I0318 07:15:23.753769   21050 start.go:901] validating driver "docker" against &{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 07:15:23.753894   21050 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 07:15:23.754087   21050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 07:15:23.856326   21050 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:6 ContainersRunning:2 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:160 SystemTime:2024-03-18 14:15:23.845453831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 07:15:23.859533   21050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 07:15:23.859600   21050 cni.go:84] Creating CNI manager for ""
	I0318 07:15:23.859609   21050 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 07:15:23.859675   21050 start.go:340] cluster config:
	{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 07:15:23.901786   21050 out.go:177] * Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	I0318 07:15:23.922965   21050 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 07:15:23.944562   21050 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0318 07:15:23.986877   21050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:15:23.986933   21050 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 07:15:23.986971   21050 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 07:15:23.986991   21050 cache.go:56] Caching tarball of preloaded images
	I0318 07:15:23.987219   21050 preload.go:173] Found /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 07:15:23.987240   21050 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 07:15:23.988250   21050 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/multinode-242000/config.json ...
	I0318 07:15:24.038587   21050 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 07:15:24.038602   21050 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0318 07:15:24.038631   21050 cache.go:194] Successfully downloaded all kic artifacts
	I0318 07:15:24.038670   21050 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkba9fe2419e9cf6c0347d7f2eb6e7a616348974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 07:15:24.038863   21050 start.go:364] duration metric: took 169.214µs to acquireMachinesLock for "multinode-242000"
	I0318 07:15:24.038886   21050 start.go:96] Skipping create...Using existing machine configuration
	I0318 07:15:24.038897   21050 fix.go:54] fixHost starting: 
	I0318 07:15:24.039139   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:24.088925   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:24.088986   21050 fix.go:112] recreateIfNeeded on multinode-242000: state= err=unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:24.089009   21050 fix.go:117] machineExists: false. err=machine does not exist
	I0318 07:15:24.110604   21050 out.go:177] * docker "multinode-242000" container is missing, will recreate.
	I0318 07:15:24.132450   21050 delete.go:124] DEMOLISHING multinode-242000 ...
	I0318 07:15:24.132637   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:24.184033   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 07:15:24.184080   21050 stop.go:83] unable to get state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:24.184106   21050 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:24.184473   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:24.234652   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:24.234701   21050 delete.go:82] Unable to get host status for multinode-242000, assuming it has already been deleted: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:24.234784   21050 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 07:15:24.284456   21050 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 07:15:24.284497   21050 kic.go:371] could not find the container multinode-242000 to remove it. will try anyways
	I0318 07:15:24.284578   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:24.334461   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	W0318 07:15:24.334506   21050 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:24.334592   21050 cli_runner.go:164] Run: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0"
	W0318 07:15:24.384245   21050 cli_runner.go:211] docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0318 07:15:24.384276   21050 oci.go:650] error shutdown multinode-242000: docker exec --privileged -t multinode-242000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:25.384893   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:25.438234   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:25.438290   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:25.438303   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:25.438334   21050 retry.go:31] will retry after 404.159974ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:25.844840   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:25.899385   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:25.899429   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:25.899438   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:25.899461   21050 retry.go:31] will retry after 646.801368ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:26.547614   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:26.601681   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:26.601726   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:26.601742   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:26.601766   21050 retry.go:31] will retry after 1.057581205s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:27.661703   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:27.714197   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:27.714250   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:27.714258   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:27.714279   21050 retry.go:31] will retry after 876.363101ms: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:28.591026   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:28.642699   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:28.642741   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:28.642762   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:28.642797   21050 retry.go:31] will retry after 2.323194907s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:30.968350   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:31.020489   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:31.020531   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:31.020539   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:31.020562   21050 retry.go:31] will retry after 3.230673555s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:34.251970   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:34.304603   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:34.304648   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:34.304657   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:34.304677   21050 retry.go:31] will retry after 8.530687621s: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:42.837742   21050 cli_runner.go:164] Run: docker container inspect multinode-242000 --format={{.State.Status}}
	W0318 07:15:42.889525   21050 cli_runner.go:211] docker container inspect multinode-242000 --format={{.State.Status}} returned with exit code 1
	I0318 07:15:42.889571   21050 oci.go:662] temporary error verifying shutdown: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	I0318 07:15:42.889587   21050 oci.go:664] temporary error: container multinode-242000 status is  but expect it to be exited
	I0318 07:15:42.889621   21050 oci.go:88] couldn't shut down multinode-242000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000
	 
	I0318 07:15:42.889697   21050 cli_runner.go:164] Run: docker rm -f -v multinode-242000
	I0318 07:15:42.940297   21050 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-242000
	W0318 07:15:42.990037   21050 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-242000 returned with exit code 1
	I0318 07:15:42.990147   21050 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:15:43.040212   21050 cli_runner.go:164] Run: docker network rm multinode-242000
	I0318 07:15:43.162928   21050 fix.go:124] Sleeping 1 second for extra luck!
	I0318 07:15:44.163339   21050 start.go:125] createHost starting for "" (driver="docker")
	I0318 07:15:44.185554   21050 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0318 07:15:44.185745   21050 start.go:159] libmachine.API.Create for "multinode-242000" (driver="docker")
	I0318 07:15:44.185786   21050 client.go:168] LocalClient.Create starting
	I0318 07:15:44.186018   21050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/ca.pem
	I0318 07:15:44.186098   21050 main.go:141] libmachine: Decoding PEM data...
	I0318 07:15:44.186127   21050 main.go:141] libmachine: Parsing certificate...
	I0318 07:15:44.186208   21050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-11233/.minikube/certs/cert.pem
	I0318 07:15:44.186264   21050 main.go:141] libmachine: Decoding PEM data...
	I0318 07:15:44.186275   21050 main.go:141] libmachine: Parsing certificate...
	I0318 07:15:44.207938   21050 cli_runner.go:164] Run: docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0318 07:15:44.261837   21050 cli_runner.go:211] docker network inspect multinode-242000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0318 07:15:44.261931   21050 network_create.go:281] running [docker network inspect multinode-242000] to gather additional debugging logs...
	I0318 07:15:44.261950   21050 cli_runner.go:164] Run: docker network inspect multinode-242000
	W0318 07:15:44.313163   21050 cli_runner.go:211] docker network inspect multinode-242000 returned with exit code 1
	I0318 07:15:44.313194   21050 network_create.go:284] error running [docker network inspect multinode-242000]: docker network inspect multinode-242000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-242000 not found
	I0318 07:15:44.313204   21050 network_create.go:286] output of [docker network inspect multinode-242000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-242000 not found
	
	** /stderr **
	I0318 07:15:44.313337   21050 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0318 07:15:44.365125   21050 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:15:44.366651   21050 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:15:44.368122   21050 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0318 07:15:44.368629   21050 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00090f040}
	I0318 07:15:44.368648   21050 network_create.go:124] attempt to create docker network multinode-242000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0318 07:15:44.368746   21050 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-242000 multinode-242000
	I0318 07:15:44.455359   21050 network_create.go:108] docker network multinode-242000 192.168.76.0/24 created
	I0318 07:15:44.455402   21050 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-242000" container
	I0318 07:15:44.455512   21050 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0318 07:15:44.507819   21050 cli_runner.go:164] Run: docker volume create multinode-242000 --label name.minikube.sigs.k8s.io=multinode-242000 --label created_by.minikube.sigs.k8s.io=true
	I0318 07:15:44.558197   21050 oci.go:103] Successfully created a docker volume multinode-242000
	I0318 07:15:44.558320   21050 cli_runner.go:164] Run: docker run --rm --name multinode-242000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-242000 --entrypoint /usr/bin/test -v multinode-242000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0318 07:15:44.859434   21050 oci.go:107] Successfully prepared a docker volume multinode-242000
	I0318 07:15:44.859502   21050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 07:15:44.859515   21050 kic.go:194] Starting extracting preloaded images to volume ...
	I0318 07:15:44.859616   21050 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-242000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-242000
helpers_test.go:235: (dbg) docker inspect multinode-242000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-242000",
	        "Id": "57c541933ac7936b757db3f02a2a7df55bc73b5cfd23560ce36f4949bebe55f9",
	        "Created": "2024-03-18T14:15:44.416126331Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-242000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (116.390505ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 07:16:51.672560   21141 status.go:249] status error: host: state: unknown state "multinode-242000": docker container inspect multinode-242000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-242000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (88.46s)
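Note: the restart above ran against a profile whose container no longer existed, so every docker container inspect multinode-242000 --format={{.State.Status}} exited 1 with "No such container" while minikube retried with increasing backoff (404ms, 646ms, 1.05s, ...) before tearing the profile down and recreating it; the run was then killed at 1m28s, mid-extraction of the preloaded images. A minimal shell sketch of that polling pattern, for checking the container state by hand (illustrative only; it assumes Docker Desktop is running and mirrors the commands in the log, not minikube's internal retry logic):

	for delay in 0.4 0.6 1 2; do
	  # succeeds and breaks as soon as the container exists; exits 1 otherwise
	  status="$(docker container inspect multinode-242000 --format '{{.State.Status}}' 2>/dev/null)" && break
	  sleep "$delay"
	done
	echo "${status:-no such container}"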

                                                
                                    
TestScheduledStopUnix (300.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-911000 --memory=2048 --driver=docker 
E0318 07:19:26.123801   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:23:03.067415   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:23:47.280670   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-911000 --memory=2048 --driver=docker : signal: killed (5m0.002588596s)

                                                
                                                
-- stdout --
	* [scheduled-stop-911000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-911000" primary control-plane node in "scheduled-stop-911000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-911000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-911000" primary control-plane node in "scheduled-stop-911000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-18 07:24:04.443758 -0700 PDT m=+4874.097999327
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-911000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-911000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-911000",
	        "Id": "d9e2a5e04caeb3334ab8b43554767c74853eb09473d4beb946d216bae1af57f9",
	        "Created": "2024-03-18T14:19:05.576125877Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-911000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-911000 -n scheduled-stop-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-911000 -n scheduled-stop-911000: exit status 7 (116.832712ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 07:24:04.615048   21608 status.go:249] status error: host: state: unknown state "scheduled-stop-911000": docker container inspect scheduled-stop-911000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-911000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-911000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-911000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-911000
--- FAIL: TestScheduledStopUnix (300.92s)
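Note: the docker inspect output above is the leftover bridge network, not a container (its Containers map is empty); the start was killed by the 5-minute test timeout before the node container finished creating, so only the network and the profile were left to clean up. A hedged cleanup sketch built from the labels visible in that output (assumes the docker CLI is on PATH; the delete -p the harness runs normally handles this):

	# list minikube-created networks left behind by a killed start
	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	# remove the specific leftover network
	docker network rm scheduled-stop-911000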

                                                
                                    
TestSkaffold (300.92s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe4293920000 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-483000 --memory=2600 --driver=docker 
E0318 07:25:10.328383   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 07:28:03.063361   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:28:47.277730   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-483000 --memory=2600 --driver=docker : signal: killed (4m52.568129113s)

                                                
                                                
-- stdout --
	* [skaffold-483000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-483000" primary control-plane node in "skaffold-483000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-483000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-483000" primary control-plane node in "skaffold-483000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-03-18 07:29:05.37085 -0700 PDT m=+5175.028008952
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-483000
helpers_test.go:235: (dbg) docker inspect skaffold-483000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-483000",
	        "Id": "91d007c8f0b7fad40e41444b20ecbc05c8a0e042f887b28288fbc195c029e658",
	        "Created": "2024-03-18T14:24:13.912130717Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-483000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-483000 -n skaffold-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-483000 -n skaffold-483000: exit status 7 (116.287709ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 07:29:05.538352   21719 status.go:249] status error: host: state: unknown state "skaffold-483000": docker container inspect skaffold-483000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-483000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-483000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-483000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-483000
--- FAIL: TestSkaffold (300.92s)

                                                
                                    
TestInsufficientStorage (300.75s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-312000 --memory=2048 --output=json --wait=true --driver=docker 
E0318 07:33:03.060785   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 07:33:47.275126   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-312000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004177751s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6d747d9e-8126-4c10-8351-ebad4330c6a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-312000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"148f6900-eab0-4000-a2ae-aaa9fcd5b98d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18429"}}
	{"specversion":"1.0","id":"72221fa4-58c7-43d0-8ccc-a1ba1b474b24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig"}}
	{"specversion":"1.0","id":"03492d4f-4c87-4d50-9d69-b7bce243a661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7b43b56d-f919-4a1c-aba6-e485785eace0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27e8e02d-b254-482a-9cda-fbedd4aed8bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube"}}
	{"specversion":"1.0","id":"4d12c5b5-1dde-4d53-897d-56330683e164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3e7a81e3-a81e-4631-891c-ac65f57238de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"020d4175-2384-415e-a3fa-527906eb8381","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2647c589-1df3-4741-b2c2-2516a4d01538","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3139d7ca-f7fe-42bc-8d6b-e8e24ca9825c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"ed5471fe-f8c2-48a0-a984-56095b0a4673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-312000\" primary control-plane node in \"insufficient-storage-312000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"420de457-a77c-43ee-8d11-abf9be579b4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1710284843-18375 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d39e7980-1fd8-4024-bef0-9076b2f0e7cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-312000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-312000 --output=json --layout=cluster: context deadline exceeded (944ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-312000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-312000
--- FAIL: TestInsufficientStorage (300.75s)
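
The unmarshalling failure above follows directly from the expired deadline: with only 944ns of the test's budget left, the status command is cut off before it can write any JSON, and decoding the empty output fails with "unexpected end of JSON input". A minimal Go sketch of that chain, assuming a hypothetical ClusterState type rather than minikube's actual status schema:

	// Sketch only: expired context -> empty stdout -> JSON decode error,
	// as seen in TestInsufficientStorage. ClusterState is illustrative.
	package main

	import (
		"context"
		"encoding/json"
		"fmt"
		"os/exec"
		"time"
	)

	type ClusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
	}

	func main() {
		// Almost no budget left, as in the log's "(944ns)".
		ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
		defer cancel()

		out, err := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"status", "-p", "insufficient-storage-312000",
			"--output=json", "--layout=cluster").Output()
		fmt.Println("exec err:", err) // context error; out stays empty

		var st ClusterState
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("unmarshalling:", err) // unexpected end of JSON input
		}
	}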
Test pass (170/211)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.4
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.35
9 TestDownloadOnly/v1.20.0/DeleteAll 13.98
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.38
12 TestDownloadOnly/v1.28.4/json-events 21.77
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.3
18 TestDownloadOnly/v1.28.4/DeleteAll 13.98
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.38
21 TestDownloadOnly/v1.29.0-rc.2/json-events 18.44
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.32
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 13.99
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.38
29 TestDownloadOnlyKic 2
30 TestBinaryMirror 1.65
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 200.98
40 TestAddons/parallel/InspektorGadget 11.95
41 TestAddons/parallel/MetricsServer 7.47
42 TestAddons/parallel/HelmTiller 10.9
44 TestAddons/parallel/CSI 44.11
45 TestAddons/parallel/Headlamp 12.64
46 TestAddons/parallel/CloudSpanner 5.74
47 TestAddons/parallel/LocalPath 54.01
48 TestAddons/parallel/NvidiaDevicePlugin 5.69
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 11.7
61 TestHyperKitDriverInstallOrUpdate 6.14
64 TestErrorSpam/setup 22.41
65 TestErrorSpam/start 2.13
66 TestErrorSpam/status 1.33
67 TestErrorSpam/pause 1.77
68 TestErrorSpam/unpause 1.91
69 TestErrorSpam/stop 2.81
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 67.64
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 28.94
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.91
81 TestFunctional/serial/CacheCmd/cache/add_local 1.62
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
85 TestFunctional/serial/CacheCmd/cache/cache_reload 3.44
86 TestFunctional/serial/CacheCmd/cache/delete 0.18
87 TestFunctional/serial/MinikubeKubectlCmd 0.56
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
89 TestFunctional/serial/ExtraConfig 38.27
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.13
92 TestFunctional/serial/LogsFileCmd 3.17
93 TestFunctional/serial/InvalidService 4.26
95 TestFunctional/parallel/ConfigCmd 0.54
96 TestFunctional/parallel/DashboardCmd 14.21
97 TestFunctional/parallel/DryRun 1.46
98 TestFunctional/parallel/InternationalLanguage 0.71
99 TestFunctional/parallel/StatusCmd 1.34
104 TestFunctional/parallel/AddonsCmd 0.29
105 TestFunctional/parallel/PersistentVolumeClaim 26.16
107 TestFunctional/parallel/SSHCmd 0.85
108 TestFunctional/parallel/CpCmd 2.45
109 TestFunctional/parallel/MySQL 30.66
110 TestFunctional/parallel/FileSync 0.47
111 TestFunctional/parallel/CertSync 2.81
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
119 TestFunctional/parallel/License 1.54
120 TestFunctional/parallel/Version/short 0.15
121 TestFunctional/parallel/Version/components 0.67
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.15
127 TestFunctional/parallel/ImageCommands/Setup 5.33
128 TestFunctional/parallel/DockerEnv/bash 1.9
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.45
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.9
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.16
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.52
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.69
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.35
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.19
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.04
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
150 TestFunctional/parallel/ServiceCmd/DeployApp 7.12
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
152 TestFunctional/parallel/ProfileCmd/profile_list 0.57
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
154 TestFunctional/parallel/MountCmd/any-port 11.64
155 TestFunctional/parallel/ServiceCmd/List 1.04
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.06
157 TestFunctional/parallel/ServiceCmd/HTTPS 15
158 TestFunctional/parallel/MountCmd/specific-port 2.26
159 TestFunctional/parallel/MountCmd/VerifyCleanup 2.52
160 TestFunctional/parallel/ServiceCmd/Format 15
161 TestFunctional/parallel/ServiceCmd/URL 15
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestMultiControlPlane/serial/StartCluster 187.52
169 TestMultiControlPlane/serial/DeployApp 9.87
170 TestMultiControlPlane/serial/PingHostFromPods 1.5
171 TestMultiControlPlane/serial/AddWorkerNode 21.03
172 TestMultiControlPlane/serial/NodeLabels 0.06
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.22
174 TestMultiControlPlane/serial/CopyFile 26.9
175 TestMultiControlPlane/serial/StopSecondaryNode 11.95
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.9
177 TestMultiControlPlane/serial/RestartSecondaryNode 23.19
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.25
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 191.22
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.57
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.86
182 TestMultiControlPlane/serial/StopCluster 33.18
183 TestMultiControlPlane/serial/RestartCluster 61.09
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.9
185 TestMultiControlPlane/serial/AddSecondaryNode 43.93
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.22
189 TestImageBuild/serial/Setup 23.24
190 TestImageBuild/serial/NormalBuild 4.82
191 TestImageBuild/serial/BuildWithBuildArg 1.23
192 TestImageBuild/serial/BuildWithDockerIgnore 1.07
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.05
197 TestJSONOutput/start/Command 37.49
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.61
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.61
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 5.77
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.78
222 TestKicCustomNetwork/create_custom_network 25.5
223 TestKicCustomNetwork/use_default_bridge_network 25.08
224 TestKicExistingNetwork 25.51
225 TestKicCustomSubnet 25.64
226 TestKicStaticIP 24.87
227 TestMainNoArgs 0.09
228 TestMinikubeProfile 53
231 TestMountStart/serial/StartWithMountFirst 8.17
232 TestMountStart/serial/VerifyMountFirst 0.4
233 TestMountStart/serial/StartWithMountSecond 8.4
253 TestPreload 131.81
274 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 19.91
275 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 22.64
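
The TestJSONOutput/*/parallel/DistinctCurrentSteps and IncreasingCurrentSteps rows above exercise the same line-delimited CloudEvents stream shown in the TestInsufficientStorage stdout earlier, whose io.k8s.sigs.minikube.step events carry a data.currentstep out of data.totalsteps. A minimal decoding sketch, assuming only the field names visible in that output (not minikube's internal event types):

	// Sketch only: read JSON events line by line and require strictly
	// increasing "currentstep" values (the steps may skip numbers, e.g.
	// 1 -> 3 -> 5 -> 8 in the stream above, but must never go backwards).
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
		"strings"
	)

	type event struct {
		Type string `json:"type"`
		Data struct {
			CurrentStep string `json:"currentstep"`
		} `json:"data"`
	}

	func main() {
		last := -1
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "{") {
				continue // skip non-JSON lines in the stream
			}
			var ev event
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				continue
			}
			if ev.Type != "io.k8s.sigs.minikube.step" {
				continue // only step events carry currentstep
			}
			if n, err := strconv.Atoi(ev.Data.CurrentStep); err != nil || n <= last {
				fmt.Printf("step out of order: %q after %d\n", ev.Data.CurrentStep, last)
				os.Exit(1)
			} else {
				last = n
			}
		}
	}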
TestDownloadOnly/v1.20.0/json-events (22.4s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (22.396673995s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.40s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-571000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-571000: exit status 85 (344.63577ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-571000 | jenkins | v1.32.0 | 18 Mar 24 06:02 PDT |          |
	|         | -p download-only-571000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 06:02:50
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 06:02:50.326817   11709 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:02:50.327007   11709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:02:50.327013   11709 out.go:304] Setting ErrFile to fd 2...
	I0318 06:02:50.327017   11709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:02:50.327202   11709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	W0318 06:02:50.327332   11709 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18429-11233/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18429-11233/.minikube/config/config.json: no such file or directory
	I0318 06:02:50.329075   11709 out.go:298] Setting JSON to true
	I0318 06:02:50.353081   11709 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9143,"bootTime":1710757827,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 06:02:50.353177   11709 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 06:02:50.377182   11709 out.go:97] [download-only-571000] minikube v1.32.0 on Darwin 14.3.1
	I0318 06:02:50.399146   11709 out.go:169] MINIKUBE_LOCATION=18429
	I0318 06:02:50.377401   11709 notify.go:220] Checking for updates...
	W0318 06:02:50.377383   11709 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 06:02:50.442997   11709 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 06:02:50.486154   11709 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 06:02:50.509178   11709 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 06:02:50.531186   11709 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	W0318 06:02:50.573144   11709 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 06:02:50.573662   11709 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 06:02:50.630330   11709 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 06:02:50.630471   11709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:02:50.737245   11709 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-18 13:02:50.727094419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:02:50.759670   11709 out.go:97] Using the docker driver based on user configuration
	I0318 06:02:50.759733   11709 start.go:297] selected driver: docker
	I0318 06:02:50.759742   11709 start.go:901] validating driver "docker" against <nil>
	I0318 06:02:50.759900   11709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:02:50.862399   11709 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-18 13:02:50.85269863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:02:50.862610   11709 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 06:02:50.867150   11709 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0318 06:02:50.867536   11709 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 06:02:50.889224   11709 out.go:169] Using Docker Desktop driver with root privileges
	I0318 06:02:50.911000   11709 cni.go:84] Creating CNI manager for ""
	I0318 06:02:50.911044   11709 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 06:02:50.911170   11709 start.go:340] cluster config:
	{Name:download-only-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 06:02:50.933206   11709 out.go:97] Starting "download-only-571000" primary control-plane node in "download-only-571000" cluster
	I0318 06:02:50.933250   11709 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 06:02:50.955013   11709 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0318 06:02:50.955046   11709 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 06:02:50.955113   11709 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 06:02:51.004895   11709 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 06:02:51.004937   11709 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0318 06:02:51.005196   11709 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0318 06:02:51.005338   11709 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0318 06:02:51.569811   11709 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0318 06:02:51.569831   11709 cache.go:56] Caching tarball of preloaded images
	I0318 06:02:51.570067   11709 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 06:02:51.591999   11709 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 06:02:51.592058   11709 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 06:02:52.134694   11709 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0318 06:03:08.448196   11709 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 06:03:08.448373   11709 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 06:03:09.002181   11709 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 06:03:09.002428   11709 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/download-only-571000/config.json ...
	I0318 06:03:09.002452   11709 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/download-only-571000/config.json: {Name:mk733bd1e27214c975f72d93e77f58f549a50e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 06:03:09.002720   11709 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 06:03:09.003008   11709 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-571000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-571000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.35s)
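
The preload steps in the log above fetch the tarball with an explicit ?checksum=md5:... query, then save and verify that checksum before trusting the cache. A minimal sketch of the verification step; verifyMD5 is a hypothetical helper, not minikube's download package:

	// Sketch only: hash the downloaded tarball and compare against the md5
	// hex digest from the download URL logged above.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Digest taken from the download URL in the log above.
		err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
			"9a82241e9b8b4ad2b5cca73108f2c7a3")
		fmt.Println(err)
	}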

TestDownloadOnly/v1.20.0/DeleteAll (13.98s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-darwin-amd64 delete --all: (13.978921773s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (13.98s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-571000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnly/v1.28.4/json-events (21.77s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-577000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-577000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (21.772714554s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (21.77s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-577000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-577000: exit status 85 (300.320637ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-571000 | jenkins | v1.32.0 | 18 Mar 24 06:02 PDT |                     |
	|         | -p download-only-571000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 06:03 PDT | 18 Mar 24 06:03 PDT |
	| delete  | -p download-only-571000        | download-only-571000 | jenkins | v1.32.0 | 18 Mar 24 06:03 PDT | 18 Mar 24 06:03 PDT |
	| start   | -o=json --download-only        | download-only-577000 | jenkins | v1.32.0 | 18 Mar 24 06:03 PDT |                     |
	|         | -p download-only-577000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 06:03:27
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 06:03:27.429053   11807 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:03:27.429228   11807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:03:27.429234   11807 out.go:304] Setting ErrFile to fd 2...
	I0318 06:03:27.429237   11807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:03:27.429424   11807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 06:03:27.430781   11807 out.go:298] Setting JSON to true
	I0318 06:03:27.452723   11807 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9180,"bootTime":1710757827,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 06:03:27.452822   11807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 06:03:27.473979   11807 out.go:97] [download-only-577000] minikube v1.32.0 on Darwin 14.3.1
	I0318 06:03:27.495044   11807 out.go:169] MINIKUBE_LOCATION=18429
	I0318 06:03:27.474095   11807 notify.go:220] Checking for updates...
	I0318 06:03:27.538942   11807 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 06:03:27.582057   11807 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 06:03:27.604024   11807 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 06:03:27.626159   11807 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	W0318 06:03:27.669727   11807 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 06:03:27.670232   11807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 06:03:27.725147   11807 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 06:03:27.725296   11807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:03:27.827966   11807 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-18 13:03:27.817794757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:03:27.849213   11807 out.go:97] Using the docker driver based on user configuration
	I0318 06:03:27.849244   11807 start.go:297] selected driver: docker
	I0318 06:03:27.849252   11807 start.go:901] validating driver "docker" against <nil>
	I0318 06:03:27.849407   11807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:03:27.951922   11807 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-18 13:03:27.942176855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:03:27.952094   11807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 06:03:27.955016   11807 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0318 06:03:27.955169   11807 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 06:03:27.977068   11807 out.go:169] Using Docker Desktop driver with root privileges
	I0318 06:03:27.998885   11807 cni.go:84] Creating CNI manager for ""
	I0318 06:03:27.998930   11807 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 06:03:27.998948   11807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 06:03:27.999108   11807 start.go:340] cluster config:
	{Name:download-only-577000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-577000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 06:03:28.020885   11807 out.go:97] Starting "download-only-577000" primary control-plane node in "download-only-577000" cluster
	I0318 06:03:28.020928   11807 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 06:03:28.043892   11807 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0318 06:03:28.043937   11807 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 06:03:28.044029   11807 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 06:03:28.094281   11807 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 06:03:28.094308   11807 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0318 06:03:28.094447   11807 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0318 06:03:28.094465   11807 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0318 06:03:28.094472   11807 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0318 06:03:28.094481   11807 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0318 06:03:28.307353   11807 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 06:03:28.307403   11807 cache.go:56] Caching tarball of preloaded images
	I0318 06:03:28.307667   11807 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 06:03:28.329401   11807 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 06:03:28.329430   11807 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0318 06:03:28.879792   11807 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 06:03:45.906913   11807 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0318 06:03:45.907150   11807 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0318 06:03:46.489219   11807 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 06:03:46.489465   11807 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/download-only-577000/config.json ...
	I0318 06:03:46.489492   11807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/download-only-577000/config.json: {Name:mkf49a48771e6e600d7fa5c4208d18c65fdcf06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 06:03:46.489814   11807 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 06:03:46.490026   11807 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-577000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-577000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.30s)
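
One difference worth noting between this run and the v1.20.0 run earlier: there the log said "CNI unnecessary in this configuration, recommending no CNI", while here the docker driver plus docker runtime on kubernetes v1.24+ leads to "recommending bridge" and NetworkPlugin=cni. A minimal sketch of that version gate, using golang.org/x/mod/semver for illustration (minikube's cni package may decide differently):

	// Sketch only: version-gated CNI recommendation for the docker driver +
	// docker runtime combination seen in the two start runs above.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func chooseCNI(k8sVersion, driver, runtime string) string {
		if driver == "docker" && runtime == "docker" && semver.Compare(k8sVersion, "v1.24") < 0 {
			return "" // CNI unnecessary in this configuration
		}
		return "bridge" // sets NetworkPlugin=cni
	}

	func main() {
		fmt.Println(chooseCNI("v1.20.0", "docker", "docker") == "") // true: no CNI
		fmt.Println(chooseCNI("v1.28.4", "docker", "docker"))       // bridge
	}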

TestDownloadOnly/v1.28.4/DeleteAll (13.98s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-darwin-amd64 delete --all: (13.983825242s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (13.98s)
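
The kicbase handling also differs between the two start runs: the v1.20.0 run wrote the base-image tarball to the local cache ("Writing ... to local cache"), while the v1.28.4 run found it there ("exists in cache, skipping pull", "successfully saved ... as a tarball"). A minimal sketch of that check-then-save step; the helper and path are illustrative, not minikube's image package:

	// Sketch only: keep a tarball copy of the kicbase image in the on-disk
	// cache, downloading it only on a miss, mirroring the
	// "Checking ... in local cache directory" lines above.
	package main

	import (
		"errors"
		"fmt"
		"os"
	)

	func ensureCacheTarball(cacheTar string, save func() error) (string, error) {
		if _, err := os.Stat(cacheTar); err == nil {
			return "exists in cache, skipping pull", nil
		} else if !errors.Is(err, os.ErrNotExist) {
			return "", err
		}
		if err := save(); err != nil { // "Writing ... to local cache"
			return "", err
		}
		return "successfully saved as a tarball", nil
	}

	func main() {
		msg, err := ensureCacheTarball(
			"/Users/jenkins/.minikube/cache/kic/amd64/kicbase.tar", // illustrative path
			func() error { return nil },                            // download stub
		)
		fmt.Println(msg, err)
	}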

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-577000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnly/v1.29.0-rc.2/json-events (18.44s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-034000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-034000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (18.440229255s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (18.44s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-034000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-034000: exit status 85 (320.491035ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-571000 | jenkins | v1.32.0 | 18 Mar 24 06:02 PDT |                     |
	|         | -p download-only-571000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 06:03 PDT | 18 Mar 24 06:03 PDT |
	| delete  | -p download-only-571000           | download-only-571000 | jenkins | v1.32.0 | 18 Mar 24 06:03 PDT | 18 Mar 24 06:03 PDT |
	| start   | -o=json --download-only           | download-only-577000 | jenkins | v1.32.0 | 18 Mar 24 06:03 PDT |                     |
	|         | -p download-only-577000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 06:03 PDT | 18 Mar 24 06:04 PDT |
	| delete  | -p download-only-577000           | download-only-577000 | jenkins | v1.32.0 | 18 Mar 24 06:04 PDT | 18 Mar 24 06:04 PDT |
	| start   | -o=json --download-only           | download-only-034000 | jenkins | v1.32.0 | 18 Mar 24 06:04 PDT |                     |
	|         | -p download-only-034000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 06:04:03
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 06:04:03.864464   11895 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:04:03.864712   11895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:04:03.864717   11895 out.go:304] Setting ErrFile to fd 2...
	I0318 06:04:03.864721   11895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:04:03.864900   11895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 06:04:03.866325   11895 out.go:298] Setting JSON to true
	I0318 06:04:03.888242   11895 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9216,"bootTime":1710757827,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 06:04:03.888336   11895 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 06:04:03.909721   11895 out.go:97] [download-only-034000] minikube v1.32.0 on Darwin 14.3.1
	I0318 06:04:03.932338   11895 out.go:169] MINIKUBE_LOCATION=18429
	I0318 06:04:03.909872   11895 notify.go:220] Checking for updates...
	I0318 06:04:03.975179   11895 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 06:04:03.996589   11895 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 06:04:04.018442   11895 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 06:04:04.039329   11895 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	W0318 06:04:04.086264   11895 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 06:04:04.086701   11895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 06:04:04.142897   11895 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 06:04:04.143044   11895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:04:04.246375   11895 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-18 13:04:04.236407862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:04:04.268218   11895 out.go:97] Using the docker driver based on user configuration
	I0318 06:04:04.268262   11895 start.go:297] selected driver: docker
	I0318 06:04:04.268274   11895 start.go:901] validating driver "docker" against <nil>
	I0318 06:04:04.268490   11895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:04:04.369318   11895 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:115 SystemTime:2024-03-18 13:04:04.359506095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:04:04.369507   11895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 06:04:04.372419   11895 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0318 06:04:04.372568   11895 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 06:04:04.393742   11895 out.go:169] Using Docker Desktop driver with root privileges
	I0318 06:04:04.414629   11895 cni.go:84] Creating CNI manager for ""
	I0318 06:04:04.414677   11895 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 06:04:04.414707   11895 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 06:04:04.414812   11895 start.go:340] cluster config:
	{Name:download-only-034000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 06:04:04.435991   11895 out.go:97] Starting "download-only-034000" primary control-plane node in "download-only-034000" cluster
	I0318 06:04:04.436047   11895 cache.go:121] Beginning downloading kic base image for docker with docker
	I0318 06:04:04.457910   11895 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0318 06:04:04.458035   11895 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 06:04:04.458128   11895 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0318 06:04:04.510072   11895 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0318 06:04:04.510106   11895 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0318 06:04:04.510278   11895 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0318 06:04:04.510303   11895 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0318 06:04:04.510310   11895 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0318 06:04:04.510320   11895 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0318 06:04:04.715857   11895 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0318 06:04:04.715889   11895 cache.go:56] Caching tarball of preloaded images
	I0318 06:04:04.716124   11895 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 06:04:04.737859   11895 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 06:04:04.737887   11895 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0318 06:04:05.280300   11895 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/18429-11233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-034000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-034000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

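For reference, the download-only flow these tests exercise can be reproduced by hand along the following lines (a sketch only: the profile name "download-only-demo" is made up, while the flags are the ones visible in the invocations above):

  # Fetch the kic base image, the preload tarball, and kubectl without starting a cluster
  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-demo \
      --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 \
      --container-runtime=docker --driver=docker
  # Clean up afterwards, as the DeleteAll/DeleteAlwaysSucceeds steps below do
  out/minikube-darwin-amd64 delete --all
  out/minikube-darwin-amd64 delete -p download-only-demo
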
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (13.99s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-darwin-amd64 delete --all: (13.990414213s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (13.99s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-034000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (2s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-139000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-139000
--- PASS: TestDownloadOnlyKic (2.00s)

TestBinaryMirror (1.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-109000 --alsologtostderr --binary-mirror http://127.0.0.1:53693 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-109000 --alsologtostderr --binary-mirror http://127.0.0.1:53693 --driver=docker : (1.046937451s)
helpers_test.go:175: Cleaning up "binary-mirror-109000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-109000
--- PASS: TestBinaryMirror (1.65s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-636000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-636000: exit status 85 (201.254251ms)

-- stdout --
	* Profile "addons-636000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-636000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-636000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-636000: exit status 85 (222.58611ms)

-- stdout --
	* Profile "addons-636000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-636000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (200.98s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-636000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-636000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m20.980006927s)
--- PASS: TestAddons/Setup (200.98s)

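The setup step above enables every addon under test in a single start invocation; the parallel subtests that follow each probe one addon and then disable it. A trimmed sketch of the same pattern (the profile name "addons-demo" is made up and the addon list is shortened; the full list is in the command above):

  out/minikube-darwin-amd64 start -p addons-demo --wait=true --memory=4000 \
      --driver=docker --addons=registry --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=ingress
  # Each subtest then disables its addon once verified, e.g.:
  out/minikube-darwin-amd64 -p addons-demo addons disable metrics-server
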
TestAddons/parallel/InspektorGadget (11.95s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n89tq" [ebf88625-27ee-4f40-801f-9cae8c72a00c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006670233s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-636000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-636000: (5.940964224s)
--- PASS: TestAddons/parallel/InspektorGadget (11.95s)

TestAddons/parallel/MetricsServer (7.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.143117ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-hkj55" [5fe0a2f9-3906-4cc7-a1b0-9520d113823c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00470001s
addons_test.go:415: (dbg) Run:  kubectl --context addons-636000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-636000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-darwin-amd64 -p addons-636000 addons disable metrics-server --alsologtostderr -v=1: (1.406998002s)
--- PASS: TestAddons/parallel/MetricsServer (7.47s)

TestAddons/parallel/HelmTiller (10.9s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.288908ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-hvm4j" [d2a11ba4-343f-474d-ad7c-e34299641727] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006106295s
addons_test.go:473: (dbg) Run:  kubectl --context addons-636000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-636000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.139029045s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-636000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.90s)

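The HelmTiller check above verifies the tiller deployment by running the Helm v2 client as a throwaway pod. Its command, with the --context flag dropped for brevity, is roughly:

  # Runs "helm version" against the in-cluster tiller, removing the pod afterwards
  kubectl run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 \
      -it --namespace=kube-system -- version
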
TestAddons/parallel/CSI (44.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 19.17559ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-636000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-636000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c8865385-63db-4ce9-b1bf-1c19cf57a415] Pending
helpers_test.go:344: "task-pv-pod" [c8865385-63db-4ce9-b1bf-1c19cf57a415] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c8865385-63db-4ce9-b1bf-1c19cf57a415] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005221933s
addons_test.go:584: (dbg) Run:  kubectl --context addons-636000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-636000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-636000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-636000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-636000 delete pod task-pv-pod: (1.19517464s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-636000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-636000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-636000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f5010241-b3c4-434d-9543-2d2894603f00] Pending
helpers_test.go:344: "task-pv-pod-restore" [f5010241-b3c4-434d-9543-2d2894603f00] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f5010241-b3c4-434d-9543-2d2894603f00] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004391304s
addons_test.go:626: (dbg) Run:  kubectl --context addons-636000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-636000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-636000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-636000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-636000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.960136405s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-636000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.11s)

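The CSI test above walks a complete provision, snapshot, and restore cycle with plain kubectl against the csi-hostpath driver. Condensed, the sequence is (the manifests are the suite's testdata, referenced by the paths the test itself uses; kubectl's --context flag is omitted here):

  kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # provision a PVC
  kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # mount it in a pod
  kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # snapshot the volume
  kubectl delete pod task-pv-pod && kubectl delete pvc hpvc           # drop the originals
  kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # clone from the snapshot
  kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # and mount the clone
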
TestAddons/parallel/Headlamp (12.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-636000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-636000 --alsologtostderr -v=1: (1.63127267s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-vbf48" [c20e5e49-874f-4e1e-910e-7abf46e23520] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-vbf48" [c20e5e49-874f-4e1e-910e-7abf46e23520] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006195343s
--- PASS: TestAddons/parallel/Headlamp (12.64s)

TestAddons/parallel/CloudSpanner (5.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-zw4lc" [b4d98331-7cc1-4f06-8a40-f0ca4b7572e9] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003533578s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-636000
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

TestAddons/parallel/LocalPath (54.01s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-636000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-636000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-636000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [75908f0c-87b8-416e-a362-2692944fbebf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [75908f0c-87b8-416e-a362-2692944fbebf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [75908f0c-87b8-416e-a362-2692944fbebf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005428021s
addons_test.go:891: (dbg) Run:  kubectl --context addons-636000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-636000 ssh "cat /opt/local-path-provisioner/pvc-307b105b-9015-40f9-8ec4-f91abf9ff47e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-636000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-636000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-636000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-636000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.079039577s)
--- PASS: TestAddons/parallel/LocalPath (54.01s)

TestAddons/parallel/NvidiaDevicePlugin (5.69s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ljqwf" [a40d7737-97a1-45b0-a6d3-45e0b68742e5] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005235151s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-636000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.69s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-2wns5" [633a6157-6acc-4d9f-8fbf-4ff05612e408] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004871705s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-636000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-636000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.7s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-636000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-636000: (10.950674664s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-636000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-636000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-636000
--- PASS: TestAddons/StoppedEnableDisable (11.70s)

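Worth noting in the block above: the profile is stopped first, and addon enable/disable still succeeds against the stopped cluster, including disabling the never-enabled gvisor addon. The sequence, as the test runs it:

  out/minikube-darwin-amd64 stop -p addons-636000
  out/minikube-darwin-amd64 addons enable dashboard -p addons-636000
  out/minikube-darwin-amd64 addons disable dashboard -p addons-636000
  out/minikube-darwin-amd64 addons disable gvisor -p addons-636000
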
TestHyperKitDriverInstallOrUpdate (6.14s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.14s)

TestErrorSpam/setup (22.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-920000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-920000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 --driver=docker : (22.405161072s)
--- PASS: TestErrorSpam/setup (22.41s)

TestErrorSpam/start (2.13s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 start --dry-run
--- PASS: TestErrorSpam/start (2.13s)

TestErrorSpam/status (1.33s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 status
--- PASS: TestErrorSpam/status (1.33s)

TestErrorSpam/pause (1.77s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 pause
--- PASS: TestErrorSpam/pause (1.77s)

TestErrorSpam/unpause (1.91s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (2.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 stop: (2.169243464s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-920000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-920000 stop
--- PASS: TestErrorSpam/stop (2.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18429-11233/.minikube/files/etc/test/nested/copy/11705/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-014000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-014000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m7.643549925s)
--- PASS: TestFunctional/serial/StartWithProxy (67.64s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.94s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-014000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-014000 --alsologtostderr -v=8: (28.940604754s)
functional_test.go:659: soft start took 28.941070499s for "functional-014000" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.94s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-014000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 cache add registry.k8s.io/pause:3.1: (3.736346765s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 cache add registry.k8s.io/pause:3.3: (3.588701195s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 cache add registry.k8s.io/pause:latest: (2.579817846s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.91s)

TestFunctional/serial/CacheCmd/cache/add_local (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2536859874/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cache add minikube-local-cache-test:functional-014000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 cache add minikube-local-cache-test:functional-014000: (1.073292638s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cache delete minikube-local-cache-test:functional-014000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-014000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.62s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (416.344103ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 cache reload: (2.147160821s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.44s)
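
For reference, the cache round-trip this test exercises can be reproduced by hand. A sketch using the profile from this log, with minikube standing in for out/minikube-darwin-amd64:

    minikube -p functional-014000 cache add registry.k8s.io/pause:latest                 # download and cache the image
    minikube -p functional-014000 ssh sudo docker rmi registry.k8s.io/pause:latest       # remove it inside the node
    minikube -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
    minikube -p functional-014000 cache reload                                           # re-push cached images into the node
    minikube -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 0: image restored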

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 kubectl -- --context functional-014000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-014000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (38.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-014000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0318 06:13:02.999936   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:03.008759   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:03.019282   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:03.040855   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:03.081010   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:03.161681   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:03.323859   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:03.644000   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:04.286281   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:05.566780   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:08.127319   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:13.248044   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:13:23.488504   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-014000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.274533751s)
functional_test.go:757: restart took 38.274680483s for "functional-014000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.27s)
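
The --extra-config flag takes component.key=value pairs that are handed to the named Kubernetes component; rerunning start on an existing profile applies them through a restart, which is the ~38s measured above. A sketch:

    minikube start -p functional-014000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all    # restarts the existing cluster with the extra apiserver flag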

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-014000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
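
The health check above parses the control-plane pods as JSON and requires phase Running and status Ready for each component. A roughly equivalent manual query (the jsonpath form is an illustration, not the test's own code):

    kubectl --context functional-014000 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}: {.status.phase}{"\n"}{end}'
    # expect etcd, kube-apiserver, kube-controller-manager, kube-scheduler all Running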

TestFunctional/serial/LogsCmd (3.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 logs: (3.130060775s)
--- PASS: TestFunctional/serial/LogsCmd (3.13s)

TestFunctional/serial/LogsFileCmd (3.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2897188832/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2897188832/001/logs.txt: (3.167851647s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.17s)

TestFunctional/serial/InvalidService (4.26s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-014000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-014000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-014000: exit status 115 (595.069669ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.58.2:30101 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-014000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
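
Exit status 115 is minikube's SVC_UNREACHABLE error for a service with no running endpoints, which is exactly what the test provokes. Condensed:

    kubectl --context functional-014000 apply -f testdata/invalidsvc.yaml    # Service with no healthy pod behind it
    minikube -p functional-014000 service invalid-svc; echo $?               # prints the URL table, then exits 115
    kubectl --context functional-014000 delete -f testdata/invalidsvc.yaml   # clean up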

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 config get cpus: exit status 14 (63.793328ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 config get cpus: exit status 14 (64.249809ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
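
Exit status 14 is the expected result of config get on an unset key, which is why the two Non-zero exits above still count as a pass. The full cycle:

    minikube -p functional-014000 config get cpus    # unset key: error, exit 14
    minikube -p functional-014000 config set cpus 2
    minikube -p functional-014000 config get cpus    # prints 2, exit 0
    minikube -p functional-014000 config unset cpus
    minikube -p functional-014000 config get cpus    # back to exit 14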

TestFunctional/parallel/DashboardCmd (14.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-014000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-014000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 14802: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.21s)

TestFunctional/parallel/DryRun (1.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (690.210793ms)

-- stdout --
	* [functional-014000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0318 06:15:04.124068   14725 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:15:04.124747   14725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:15:04.124757   14725 out.go:304] Setting ErrFile to fd 2...
	I0318 06:15:04.124764   14725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:15:04.125425   14725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 06:15:04.127025   14725 out.go:298] Setting JSON to false
	I0318 06:15:04.150209   14725 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9877,"bootTime":1710757827,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 06:15:04.150328   14725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 06:15:04.172152   14725 out.go:177] * [functional-014000] minikube v1.32.0 on Darwin 14.3.1
	I0318 06:15:04.214124   14725 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 06:15:04.214214   14725 notify.go:220] Checking for updates...
	I0318 06:15:04.257460   14725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 06:15:04.300146   14725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 06:15:04.342048   14725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 06:15:04.363263   14725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 06:15:04.386261   14725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 06:15:04.407895   14725 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 06:15:04.408638   14725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 06:15:04.466815   14725 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 06:15:04.466984   14725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:15:04.574438   14725 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:120 SystemTime:2024-03-18 13:15:04.563264879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:15:04.617668   14725 out.go:177] * Using the docker driver based on existing profile
	I0318 06:15:04.638486   14725 start.go:297] selected driver: docker
	I0318 06:15:04.638514   14725 start.go:901] validating driver "docker" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 06:15:04.638630   14725 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 06:15:04.664874   14725 out.go:177] 
	W0318 06:15:04.686903   14725 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 06:15:04.708439   14725 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-014000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.46s)
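
--dry-run validates the requested configuration without creating or mutating anything; an undersized --memory trips the RSRC_INSUFFICIENT_REQ_MEMORY check (exit 23), while the second, unmodified invocation validates cleanly:

    minikube start -p functional-014000 --dry-run --memory 250MB --driver=docker   # exit 23: 250MiB < 1800MB minimum
    minikube start -p functional-014000 --dry-run --driver=docker                  # existing profile validates, exit 0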

TestFunctional/parallel/InternationalLanguage (0.71s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (714.070679ms)

-- stdout --
	* [functional-014000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0318 06:15:03.404587   14707 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:15:03.404757   14707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:15:03.404763   14707 out.go:304] Setting ErrFile to fd 2...
	I0318 06:15:03.404766   14707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:15:03.404973   14707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 06:15:03.406618   14707 out.go:298] Setting JSON to false
	I0318 06:15:03.430002   14707 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9876,"bootTime":1710757827,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0318 06:15:03.430100   14707 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 06:15:03.452038   14707 out.go:177] * [functional-014000] minikube v1.32.0 sur Darwin 14.3.1
	I0318 06:15:03.494747   14707 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 06:15:03.494775   14707 notify.go:220] Checking for updates...
	I0318 06:15:03.537527   14707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
	I0318 06:15:03.558634   14707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0318 06:15:03.579831   14707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 06:15:03.638569   14707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube
	I0318 06:15:03.697583   14707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 06:15:03.719042   14707 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 06:15:03.719468   14707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 06:15:03.776389   14707 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0318 06:15:03.776568   14707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0318 06:15:03.885473   14707 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:120 SystemTime:2024-03-18 13:15:03.874423737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0318 06:15:03.929122   14707 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0318 06:15:03.951259   14707 start.go:297] selected driver: docker
	I0318 06:15:03.951278   14707 start.go:901] validating driver "docker" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 06:15:03.951382   14707 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 06:15:03.976297   14707 out.go:177] 
	W0318 06:15:03.997254   14707 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 06:15:04.019159   14707 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.71s)
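
The French output shows minikube selecting a translation from the test's locale environment; presumably something equivalent to the following (the LC_ALL value is an assumption, the log does not show the variable):

    LC_ALL=fr minikube start -p functional-014000 --dry-run --memory 250MB --driver=docker
    # "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..." -- same exit 23, localized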

TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)
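
status supports a plain table, Go templates over the status struct, and JSON; the three forms exercised above (the template, including its "kublet" key, is verbatim from the test):

    minikube -p functional-014000 status
    minikube -p functional-014000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-014000 status -o json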

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (26.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [69d2d7bf-f296-46db-98b2-2b761e5fda26] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005752906s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-014000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-014000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-014000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-014000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c2f0d389-f9e0-4341-ae5c-7e2bf4fb138e] Pending
helpers_test.go:344: "sp-pod" [c2f0d389-f9e0-4341-ae5c-7e2bf4fb138e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0318 06:14:24.929932   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [c2f0d389-f9e0-4341-ae5c-7e2bf4fb138e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007106116s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-014000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-014000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-014000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ec5ae7d9-9d03-4f76-820a-b02219762bc8] Pending
helpers_test.go:344: "sp-pod" [ec5ae7d9-9d03-4f76-820a-b02219762bc8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ec5ae7d9-9d03-4f76-820a-b02219762bc8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006154815s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-014000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.16s)
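
The persistence check hinges on deleting the pod but not the claim: a file written through the mount must still exist when a replacement pod binds the same PVC. Condensed:

    kubectl --context functional-014000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-014000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-014000 exec sp-pod -- touch /tmp/mount/foo             # write through the PVC mount
    kubectl --context functional-014000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-014000 apply -f testdata/storage-provisioner/pod.yaml  # fresh pod, same claim
    kubectl --context functional-014000 exec sp-pod -- ls /tmp/mount                    # foo survives the pod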

TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (2.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh -n functional-014000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cp functional-014000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd4216459452/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh -n functional-014000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh -n functional-014000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.45s)
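
minikube cp copies in both directions: a host path into the node, or profile:path back out. The round-trip above, condensed:

    minikube -p functional-014000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
    minikube -p functional-014000 cp functional-014000:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
    minikube -p functional-014000 ssh -n functional-014000 "sudo cat /home/docker/cp-test.txt"  # verify in place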

TestFunctional/parallel/MySQL (30.66s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-014000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-rh7qj" [8fa04bd3-262a-484a-ab8d-9a2a9a7f7ecb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-rh7qj" [8fa04bd3-262a-484a-ab8d-9a2a9a7f7ecb] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.005313705s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-014000 exec mysql-859648c796-rh7qj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-014000 exec mysql-859648c796-rh7qj -- mysql -ppassword -e "show databases;": exit status 1 (249.954313ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-014000 exec mysql-859648c796-rh7qj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-014000 exec mysql-859648c796-rh7qj -- mysql -ppassword -e "show databases;": exit status 1 (129.116557ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-014000 exec mysql-859648c796-rh7qj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.66s)
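
The two failed execs (ERROR 1045, then ERROR 2002) are mysqld still initializing inside the pod; the test simply retries until a query succeeds. A hand-rolled equivalent (targeting deploy/mysql is an assumption based on the pod name above):

    until kubectl --context functional-014000 exec deploy/mysql -- \
        mysql -ppassword -e 'show databases;'; do
      sleep 2    # retry through the access-denied and socket errors during bootstrap
    done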

TestFunctional/parallel/FileSync (0.47s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11705/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo cat /etc/test/nested/copy/11705/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.47s)
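
File sync mirrors everything under the MINIKUBE_HOME files directory into the node at start, so the host-side file staged in CopySyncFile earlier surfaces inside the VM:

    # host side: .minikube/files/etc/test/nested/copy/11705/hosts
    minikube -p functional-014000 ssh "sudo cat /etc/test/nested/copy/11705/hosts"
    # -> "Test file for checking file sync process"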

TestFunctional/parallel/CertSync (2.81s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11705.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/11705.pem"
E0318 06:13:43.969144   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11705.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo cat /usr/share/ca-certificates/11705.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/117052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/117052.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/117052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo cat /usr/share/ca-certificates/117052.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.81s)
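
Certificates staged the same way are installed into the node's trust store; the hashed names checked above (51391683.0, 3ec20f2e.0) appear to be the OpenSSL subject-hash aliases of the two test PEMs. Spot-checking one pair by hand:

    minikube -p functional-014000 ssh "sudo cat /etc/ssl/certs/11705.pem"    # the synced certificate
    minikube -p functional-014000 ssh "sudo cat /etc/ssl/certs/51391683.0"   # same cert under its hash alias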

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-014000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
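
The go-template ranges over the first node's label map and prints each key; standalone it reads:

    kubectl --context functional-014000 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    # typical keys: kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os minikube.k8s.io/name ...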

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 ssh "sudo systemctl is-active crio": exit status 1 (383.613284ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
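The non-zero exit above is the expected result: systemctl is-active exits 0 only for an active unit and uses status 3 for an inactive one, so "inactive" plus exit 3 confirms crio is disabled while docker is the active runtime. A small Go sketch of the same check; it must run on a systemd host, e.g. via minikube ssh:

// Sketch: run `systemctl is-active crio` and report the state and exit
// code. A non-zero exit is not an error here; it encodes the unit state.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
	fmt.Printf("state: %s", out) // prints "inactive\n" when disabled
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 3 => inactive
	}
}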

TestFunctional/parallel/License (1.54s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.53924802s)
--- PASS: TestFunctional/parallel/License (1.54s)

TestFunctional/parallel/Version/short (0.15s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-014000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-014000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-014000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-014000 image ls --format short --alsologtostderr:
I0318 06:15:21.481026   14840 out.go:291] Setting OutFile to fd 1 ...
I0318 06:15:21.481244   14840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:21.481250   14840 out.go:304] Setting ErrFile to fd 2...
I0318 06:15:21.481254   14840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:21.481591   14840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
I0318 06:15:21.482514   14840 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:21.482616   14840 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:21.483030   14840 cli_runner.go:164] Run: docker container inspect functional-014000 --format={{.State.Status}}
I0318 06:15:21.543011   14840 ssh_runner.go:195] Run: systemctl --version
I0318 06:15:21.543135   14840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014000
I0318 06:15:21.600142   14840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54411 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/functional-014000/id_rsa Username:docker}
I0318 06:15:21.692890   14840 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-014000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/minikube-local-cache-test | functional-014000 | 6705596cd63e3 | 30B    |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| docker.io/library/nginx                     | latest            | 92b11f67642b6 | 187MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/localhost/my-image                | functional-014000 | 2d42de6382ac2 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-014000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-014000 image ls --format table --alsologtostderr:
I0318 06:15:27.594020   14896 out.go:291] Setting OutFile to fd 1 ...
I0318 06:15:27.594344   14896 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:27.594355   14896 out.go:304] Setting ErrFile to fd 2...
I0318 06:15:27.594359   14896 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:27.594567   14896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
I0318 06:15:27.595149   14896 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:27.595237   14896 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:27.595627   14896 cli_runner.go:164] Run: docker container inspect functional-014000 --format={{.State.Status}}
I0318 06:15:27.648023   14896 ssh_runner.go:195] Run: systemctl --version
I0318 06:15:27.648092   14896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014000
I0318 06:15:27.701443   14896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54411 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/functional-014000/id_rsa Username:docker}
I0318 06:15:27.795073   14896 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-014000 image ls --format json --alsologtostderr:
[{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a289930439
8e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-014000"],"size":"32900000"},{"id":"2d42de6382ac24d7b38d6415ec17f3ef8064cd1b00bedec6b9d295689eb1c603","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-014000"],"size":"1240000"},{"id":"6705596cd63e373d6bb5e11bd496d950671f0f2e9f6234c5efa240ea87c8d5bb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-014000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDige
sts":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600
000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-014000 image ls --format json --alsologtostderr:
I0318 06:15:27.280360   14890 out.go:291] Setting OutFile to fd 1 ...
I0318 06:15:27.280537   14890 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:27.280543   14890 out.go:304] Setting ErrFile to fd 2...
I0318 06:15:27.280546   14890 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:27.280729   14890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
I0318 06:15:27.281317   14890 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:27.281409   14890 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:27.281820   14890 cli_runner.go:164] Run: docker container inspect functional-014000 --format={{.State.Status}}
I0318 06:15:27.334371   14890 ssh_runner.go:195] Run: systemctl --version
I0318 06:15:27.334442   14890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014000
I0318 06:15:27.386539   14890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54411 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/functional-014000/id_rsa Username:docker}
I0318 06:15:27.482113   14890 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
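The JSON above is an array of objects with four fields: id, repoDigests, repoTags, and size (note that size is a string of bytes, not a number). A Go sketch for decoding it, assuming a minikube binary on PATH and the same profile name:

// Sketch: run `minikube image ls --format json` and decode the output.
// The struct mirrors only the fields visible in this run.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-014000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}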

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-014000 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6705596cd63e373d6bb5e11bd496d950671f0f2e9f6234c5efa240ea87c8d5bb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-014000
size: "30"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-014000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-014000 image ls --format yaml --alsologtostderr:
I0318 06:15:21.813101   14852 out.go:291] Setting OutFile to fd 1 ...
I0318 06:15:21.813473   14852 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:21.813479   14852 out.go:304] Setting ErrFile to fd 2...
I0318 06:15:21.813483   14852 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:21.813671   14852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
I0318 06:15:21.815263   14852 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:21.815384   14852 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:21.815792   14852 cli_runner.go:164] Run: docker container inspect functional-014000 --format={{.State.Status}}
I0318 06:15:21.868831   14852 ssh_runner.go:195] Run: systemctl --version
I0318 06:15:21.868906   14852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014000
I0318 06:15:21.923192   14852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54411 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/functional-014000/id_rsa Username:docker}
I0318 06:15:22.016482   14852 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 ssh pgrep buildkitd: exit status 1 (390.039982ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image build -t localhost/my-image:functional-014000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 image build -t localhost/my-image:functional-014000 testdata/build --alsologtostderr: (4.439406739s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-014000 image build -t localhost/my-image:functional-014000 testdata/build --alsologtostderr:
I0318 06:15:22.525680   14876 out.go:291] Setting OutFile to fd 1 ...
I0318 06:15:22.526602   14876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:22.526608   14876 out.go:304] Setting ErrFile to fd 2...
I0318 06:15:22.526612   14876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 06:15:22.526780   14876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
I0318 06:15:22.527346   14876 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:22.528062   14876 config.go:182] Loaded profile config "functional-014000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 06:15:22.528510   14876 cli_runner.go:164] Run: docker container inspect functional-014000 --format={{.State.Status}}
I0318 06:15:22.581619   14876 ssh_runner.go:195] Run: systemctl --version
I0318 06:15:22.581691   14876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014000
I0318 06:15:22.634225   14876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54411 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/functional-014000/id_rsa Username:docker}
I0318 06:15:22.728988   14876 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.982108652.tar
I0318 06:15:22.729070   14876 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0318 06:15:22.745921   14876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.982108652.tar
I0318 06:15:22.750312   14876 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.982108652.tar: stat -c "%s %y" /var/lib/minikube/build/build.982108652.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.982108652.tar': No such file or directory
I0318 06:15:22.750342   14876 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.982108652.tar --> /var/lib/minikube/build/build.982108652.tar (3072 bytes)
I0318 06:15:22.789864   14876 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.982108652
I0318 06:15:22.805541   14876 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.982108652 -xf /var/lib/minikube/build/build.982108652.tar
I0318 06:15:22.820978   14876 docker.go:360] Building image: /var/lib/minikube/build/build.982108652
I0318 06:15:22.821061   14876 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-014000 /var/lib/minikube/build/build.982108652
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:2d42de6382ac24d7b38d6415ec17f3ef8064cd1b00bedec6b9d295689eb1c603 done
#8 naming to localhost/my-image:functional-014000 done
#8 DONE 0.0s
I0318 06:15:26.837836   14876 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-014000 /var/lib/minikube/build/build.982108652: (4.016697923s)
I0318 06:15:26.837901   14876 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.982108652
I0318 06:15:26.856949   14876 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.982108652.tar
I0318 06:15:26.872637   14876 build_images.go:217] Built localhost/my-image:functional-014000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.982108652.tar
I0318 06:15:26.872660   14876 build_images.go:133] succeeded building to: functional-014000
I0318 06:15:26.872664   14876 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.15s)
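The BuildKit steps above imply a three-step Dockerfile: FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /. A Go sketch that reconstructs an equivalent build context and feeds it to minikube image build; the file contents and temp-dir layout are assumptions, not the actual testdata/build contents:

// Sketch: write a minimal build context matching the steps seen in the
// BuildKit log, then build it through minikube (assumed to be on PATH).
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\n" +
		"RUN true\n" +
		"ADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("minikube", "-p", "functional-014000", "image", "build",
		"-t", "localhost/my-image:functional-014000", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}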

TestFunctional/parallel/ImageCommands/Setup (5.33s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.262446481s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-014000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.33s)

TestFunctional/parallel/DockerEnv/bash (1.9s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-014000 docker-env) && out/minikube-darwin-amd64 status -p functional-014000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-014000 docker-env) && out/minikube-darwin-amd64 status -p functional-014000": (1.204976645s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-014000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.90s)
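The bash eval above works because minikube docker-env prints export KEY="VALUE" lines that point the Docker CLI at the cluster's daemon. A Go sketch of the same wiring, assuming bash-style output and both docker and minikube on PATH:

// Sketch: apply the export lines from `minikube docker-env` to this
// process's environment, then run `docker images` against the cluster.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-014000", "docker-env").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.HasPrefix(line, "export ") {
			continue // skip comments and blank lines
		}
		kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2)
		if len(kv) == 2 {
			os.Setenv(kv[0], strings.Trim(kv[1], "\""))
		}
	}
	cmd := exec.Command("docker", "images")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}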

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image load --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 image load --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr: (4.065616178s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image load --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 image load --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr: (2.571346058s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.506595155s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-014000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image load --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 image load --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr: (4.251142977s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image save gcr.io/google-containers/addon-resizer:functional-014000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 image save gcr.io/google-containers/addon-resizer:functional-014000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.523142418s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image rm gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.347254392s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-014000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 image save --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 image save --daemon gcr.io/google-containers/addon-resizer:functional-014000 --alsologtostderr: (1.227021927s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-014000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-014000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-014000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-014000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14239: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-014000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-014000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-014000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [073d6729-b410-43a3-8879-760302b78284] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [073d6729-b410-43a3-8879-760302b78284] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.005319594s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-014000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-014000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14271: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-014000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-014000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-g9d5w" [576da35d-230a-4081-a90d-f183234a66d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-g9d5w" [576da35d-230a-4081-a90d-f183234a66d5] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005114821s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "481.554305ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "88.976714ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "485.688039ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "89.99767ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

TestFunctional/parallel/MountCmd/any-port (11.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3891612621/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710767685594548000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3891612621/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710767685594548000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3891612621/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710767685594548000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3891612621/001/test-1710767685594548000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (414.122276ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 18 13:14 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 18 13:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 18 13:14 test-1710767685594548000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh cat /mount-9p/test-1710767685594548000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-014000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4c00f54b-35e7-4d91-873a-6f9bf348fa30] Pending
helpers_test.go:344: "busybox-mount" [4c00f54b-35e7-4d91-873a-6f9bf348fa30] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4c00f54b-35e7-4d91-873a-6f9bf348fa30] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4c00f54b-35e7-4d91-873a-6f9bf348fa30] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003765291s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-014000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3891612621/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.64s)
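The first findmnt in this test fails and is simply re-run: the 9p mount comes up asynchronously after the mount daemon starts, so the harness polls until it appears. A Go sketch of that wait loop; the retry count and interval are assumptions:

// Sketch: poll `minikube ssh "findmnt -T /mount-9p"` until the mount
// shows up or the attempts are exhausted.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		if exec.Command("minikube", "-p", "functional-014000",
			"ssh", "findmnt -T /mount-9p").Run() == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never appeared")
}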

TestFunctional/parallel/ServiceCmd/List (1.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 service list
functional_test.go:1455: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 service list: (1.041697231s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-darwin-amd64 -p functional-014000 service list -o json: (1.061672558s)
functional_test.go:1490: Took "1.061734344s" to run "out/minikube-darwin-amd64 -p functional-014000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.06s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 service --namespace=default --https --url hello-node: signal: killed (15.003648024s)

-- stdout --
	https://127.0.0.1:54754

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:54754
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/MountCmd/specific-port (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port86199046/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (443.663999ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port86199046/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 ssh "sudo umount -f /mount-9p": exit status 1 (386.852297ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-014000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port86199046/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)
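Note: the first `findmnt` probe above exits 1 while the 9p mount is still settling, and a retry then succeeds; that is the poll-until-mounted pattern. A small Go sketch of such a loop, where the retry count and delay are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh "findmnt -T <dir> | grep 9p"` until the
// 9p mount shows up; the first probe often fails while the mount settles,
// which is exactly what the log above shows.
func waitForMount(profile, dir string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("mount %s never appeared", dir)
}

func main() {
	if err := waitForMount("functional-014000", "/mount-9p", 10); err != nil {
		panic(err)
	}
	fmt.Println("9p mount is up")
}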

TestFunctional/parallel/MountCmd/VerifyCleanup (2.52s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4066494238/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4066494238/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4066494238/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 1 (595.714154ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-014000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4066494238/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4066494238/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-014000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4066494238/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.52s)
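Note: a single `mount --kill=true` tears down every mount daemon for the profile, which is why the per-mount stoppers afterwards find no parent process ("assuming dead"). A Go sketch of that cleanup-then-verify flow, assuming the same three mount points as this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-014000"

	// One --kill=true invocation is expected to tear down every mount
	// daemon for the profile, regardless of how many were started.
	if err := exec.Command("out/minikube-darwin-amd64",
		"mount", "-p", profile, "--kill=true").Run(); err != nil {
		panic(err)
	}

	// After cleanup, findmnt should fail for each former mount point.
	for _, dir := range []string{"/mount1", "/mount2", "/mount3"} {
		err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", "findmnt -T "+dir).Run()
		fmt.Printf("%s still mounted after kill: %v\n", dir, err == nil)
	}
}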

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 service hello-node --url --format={{.IP}}
2024/03/18 06:15:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 service hello-node --url --format={{.IP}}: signal: killed (15.004872298s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-014000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-014000 service hello-node --url: signal: killed (15.004767832s)
-- stdout --
	http://127.0.0.1:54857
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:54857
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-014000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-014000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-014000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (187.52s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-770000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0318 06:15:46.853265   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:18:03.003383   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
E0318 06:18:30.695619   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-770000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (3m6.309283879s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
E0318 06:18:47.218658   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:47.223784   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:47.233895   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:47.254402   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:47.294528   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:47.374806   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: (1.209319351s)
--- PASS: TestMultiControlPlane/serial/StartCluster (187.52s)

TestMultiControlPlane/serial/DeployApp (9.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0318 06:18:47.535811   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- rollout status deployment/busybox
E0318 06:18:47.857199   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:48.498079   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:49.778318   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:18:52.339842   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-770000 -- rollout status deployment/busybox: (7.151285963s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-68zj2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-69nfn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-7nnwg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-68zj2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-69nfn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-7nnwg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-68zj2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-69nfn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-7nnwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.87s)
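Note: the deploy check is a pod-by-name matrix: every busybox replica must resolve an external name, the cluster service name, and its FQDN. A condensed Go sketch of that matrix; the pod names are hard-coded from this run purely for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-5b5d89c9d6-68zj2", "busybox-5b5d89c9d6-69nfn", "busybox-5b5d89c9d6-7nnwg"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			// Same pod x name matrix the test walks through.
			cmd := exec.Command("out/minikube-darwin-amd64", "kubectl",
				"-p", "ha-770000", "--", "exec", pod, "--", "nslookup", name)
			if err := cmd.Run(); err != nil {
				panic(fmt.Sprintf("%s cannot resolve %s: %v", pod, name, err))
			}
		}
	}
	fmt.Println("DNS resolves from every replica")
}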

TestMultiControlPlane/serial/PingHostFromPods (1.5s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0318 06:18:57.460231   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-68zj2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-68zj2 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-69nfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-69nfn -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-7nnwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-770000 -- exec busybox-5b5d89c9d6-7nnwg -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)
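Note: busybox's nslookup prints the resolved address on its fifth output line, which is why the test pipes through `awk 'NR==5' | cut -d' ' -f3` to recover the host IP before pinging it from inside the pod. A Go sketch of the same two steps, reusing one pod name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host IP exactly the way the test's shell pipeline does.
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("out/minikube-darwin-amd64", "kubectl",
		"-p", "ha-770000", "--", "exec", "busybox-5b5d89c9d6-68zj2",
		"--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // e.g. 192.168.65.254 in this run

	// The pod must be able to ping the host it just resolved.
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command("out/minikube-darwin-amd64", "kubectl",
		"-p", "ha-770000", "--", "exec", "busybox-5b5d89c9d6-68zj2",
		"--", "sh", "-c", ping).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host reachable at", hostIP)
}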

TestMultiControlPlane/serial/AddWorkerNode (21.03s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-770000 -v=7 --alsologtostderr
E0318 06:19:07.700669   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-770000 -v=7 --alsologtostderr: (19.524396502s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: (1.507294411s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.03s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-770000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.219080187s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.22s)

TestMultiControlPlane/serial/CopyFile (26.9s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 status --output json -v=7 --alsologtostderr: (1.522223036s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp testdata/cp-test.txt ha-770000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1802022194/001/cp-test_ha-770000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000:/home/docker/cp-test.txt ha-770000-m02:/home/docker/cp-test_ha-770000_ha-770000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test_ha-770000_ha-770000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000:/home/docker/cp-test.txt ha-770000-m03:/home/docker/cp-test_ha-770000_ha-770000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test_ha-770000_ha-770000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000:/home/docker/cp-test.txt ha-770000-m04:/home/docker/cp-test_ha-770000_ha-770000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test.txt"
E0318 06:19:28.182071   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test_ha-770000_ha-770000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp testdata/cp-test.txt ha-770000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1802022194/001/cp-test_ha-770000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m02:/home/docker/cp-test.txt ha-770000:/home/docker/cp-test_ha-770000-m02_ha-770000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test_ha-770000-m02_ha-770000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m02:/home/docker/cp-test.txt ha-770000-m03:/home/docker/cp-test_ha-770000-m02_ha-770000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test_ha-770000-m02_ha-770000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m02:/home/docker/cp-test.txt ha-770000-m04:/home/docker/cp-test_ha-770000-m02_ha-770000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test_ha-770000-m02_ha-770000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp testdata/cp-test.txt ha-770000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1802022194/001/cp-test_ha-770000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m03:/home/docker/cp-test.txt ha-770000:/home/docker/cp-test_ha-770000-m03_ha-770000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test_ha-770000-m03_ha-770000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m03:/home/docker/cp-test.txt ha-770000-m02:/home/docker/cp-test_ha-770000-m03_ha-770000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test_ha-770000-m03_ha-770000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m03:/home/docker/cp-test.txt ha-770000-m04:/home/docker/cp-test_ha-770000-m03_ha-770000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test_ha-770000-m03_ha-770000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp testdata/cp-test.txt ha-770000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1802022194/001/cp-test_ha-770000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m04:/home/docker/cp-test.txt ha-770000:/home/docker/cp-test_ha-770000-m04_ha-770000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000 "sudo cat /home/docker/cp-test_ha-770000-m04_ha-770000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m04:/home/docker/cp-test.txt ha-770000-m02:/home/docker/cp-test_ha-770000-m04_ha-770000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m02 "sudo cat /home/docker/cp-test_ha-770000-m04_ha-770000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 cp ha-770000-m04:/home/docker/cp-test.txt ha-770000-m03:/home/docker/cp-test_ha-770000-m04_ha-770000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 ssh -n ha-770000-m03 "sudo cat /home/docker/cp-test_ha-770000-m04_ha-770000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (26.90s)
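Note: CopyFile is a full copy matrix: push testdata/cp-test.txt to each node, pull it back to the host, and cross-copy between every ordered node pair, verifying each hop with `ssh -n <node> sudo cat`. A condensed Go sketch of the loop structure behind the commands above (the host-side pull step is omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %v\n%s", args, err, out))
	}
}

func main() {
	profile := "ha-770000"
	nodes := []string{"ha-770000", "ha-770000-m02", "ha-770000-m03", "ha-770000-m04"}

	for _, src := range nodes {
		// Host -> node, then verify on the node.
		run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("-p", profile, "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")

		// Node -> every other node, verifying each hop.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+target)
			run("-p", profile, "ssh", "-n", dst, "sudo cat "+target)
		}
	}
}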

TestMultiControlPlane/serial/StopSecondaryNode (11.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 node stop m02 -v=7 --alsologtostderr: (10.805975076s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: exit status 7 (1.145065547s)
-- stdout --
	ha-770000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0318 06:19:58.962314   16282 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:19:58.962573   16282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:19:58.962578   16282 out.go:304] Setting ErrFile to fd 2...
	I0318 06:19:58.962582   16282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:19:58.962769   16282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 06:19:58.962947   16282 out.go:298] Setting JSON to false
	I0318 06:19:58.962969   16282 mustload.go:65] Loading cluster: ha-770000
	I0318 06:19:58.963011   16282 notify.go:220] Checking for updates...
	I0318 06:19:58.963268   16282 config.go:182] Loaded profile config "ha-770000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 06:19:58.963285   16282 status.go:255] checking status of ha-770000 ...
	I0318 06:19:58.963788   16282 cli_runner.go:164] Run: docker container inspect ha-770000 --format={{.State.Status}}
	I0318 06:19:59.017833   16282 status.go:330] ha-770000 host status = "Running" (err=<nil>)
	I0318 06:19:59.017871   16282 host.go:66] Checking if "ha-770000" exists ...
	I0318 06:19:59.018133   16282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770000
	I0318 06:19:59.072427   16282 host.go:66] Checking if "ha-770000" exists ...
	I0318 06:19:59.072728   16282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 06:19:59.072789   16282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770000
	I0318 06:19:59.126947   16282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54869 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/ha-770000/id_rsa Username:docker}
	I0318 06:19:59.219719   16282 ssh_runner.go:195] Run: systemctl --version
	I0318 06:19:59.224665   16282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 06:19:59.242327   16282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-770000
	I0318 06:19:59.296396   16282 kubeconfig.go:125] found "ha-770000" server: "https://127.0.0.1:54868"
	I0318 06:19:59.296426   16282 api_server.go:166] Checking apiserver status ...
	I0318 06:19:59.296467   16282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 06:19:59.315958   16282 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2449/cgroup
	W0318 06:19:59.332128   16282 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2449/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 06:19:59.332189   16282 ssh_runner.go:195] Run: ls
	I0318 06:19:59.337888   16282 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54868/healthz ...
	I0318 06:19:59.343782   16282 api_server.go:279] https://127.0.0.1:54868/healthz returned 200:
	ok
	I0318 06:19:59.343801   16282 status.go:422] ha-770000 apiserver status = Running (err=<nil>)
	I0318 06:19:59.343812   16282 status.go:257] ha-770000 status: &{Name:ha-770000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 06:19:59.343824   16282 status.go:255] checking status of ha-770000-m02 ...
	I0318 06:19:59.344074   16282 cli_runner.go:164] Run: docker container inspect ha-770000-m02 --format={{.State.Status}}
	I0318 06:19:59.396495   16282 status.go:330] ha-770000-m02 host status = "Stopped" (err=<nil>)
	I0318 06:19:59.396520   16282 status.go:343] host is not running, skipping remaining checks
	I0318 06:19:59.396530   16282 status.go:257] ha-770000-m02 status: &{Name:ha-770000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 06:19:59.396543   16282 status.go:255] checking status of ha-770000-m03 ...
	I0318 06:19:59.396819   16282 cli_runner.go:164] Run: docker container inspect ha-770000-m03 --format={{.State.Status}}
	I0318 06:19:59.448999   16282 status.go:330] ha-770000-m03 host status = "Running" (err=<nil>)
	I0318 06:19:59.449032   16282 host.go:66] Checking if "ha-770000-m03" exists ...
	I0318 06:19:59.449274   16282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770000-m03
	I0318 06:19:59.500650   16282 host.go:66] Checking if "ha-770000-m03" exists ...
	I0318 06:19:59.500909   16282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 06:19:59.500966   16282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770000-m03
	I0318 06:19:59.553842   16282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54980 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/ha-770000-m03/id_rsa Username:docker}
	I0318 06:19:59.649261   16282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 06:19:59.666521   16282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-770000
	I0318 06:19:59.720975   16282 kubeconfig.go:125] found "ha-770000" server: "https://127.0.0.1:54868"
	I0318 06:19:59.721001   16282 api_server.go:166] Checking apiserver status ...
	I0318 06:19:59.721047   16282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 06:19:59.740001   16282 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2261/cgroup
	W0318 06:19:59.756634   16282 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2261/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 06:19:59.756705   16282 ssh_runner.go:195] Run: ls
	I0318 06:19:59.761887   16282 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54868/healthz ...
	I0318 06:19:59.766263   16282 api_server.go:279] https://127.0.0.1:54868/healthz returned 200:
	ok
	I0318 06:19:59.766279   16282 status.go:422] ha-770000-m03 apiserver status = Running (err=<nil>)
	I0318 06:19:59.766288   16282 status.go:257] ha-770000-m03 status: &{Name:ha-770000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 06:19:59.766299   16282 status.go:255] checking status of ha-770000-m04 ...
	I0318 06:19:59.766559   16282 cli_runner.go:164] Run: docker container inspect ha-770000-m04 --format={{.State.Status}}
	I0318 06:19:59.819521   16282 status.go:330] ha-770000-m04 host status = "Running" (err=<nil>)
	I0318 06:19:59.819547   16282 host.go:66] Checking if "ha-770000-m04" exists ...
	I0318 06:19:59.819804   16282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770000-m04
	I0318 06:19:59.872319   16282 host.go:66] Checking if "ha-770000-m04" exists ...
	I0318 06:19:59.872561   16282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 06:19:59.872611   16282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770000-m04
	I0318 06:19:59.926859   16282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55103 SSHKeyPath:/Users/jenkins/minikube-integration/18429-11233/.minikube/machines/ha-770000-m04/id_rsa Username:docker}
	I0318 06:20:00.021275   16282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 06:20:00.038723   16282 status.go:257] ha-770000-m04 status: &{Name:ha-770000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.95s)
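Note: exit status 7 is the interesting assertion here: with m02 stopped, `minikube status` still prints per-node detail but deliberately exits nonzero, so callers must distinguish "cluster degraded" from "command broken". A Go sketch of reading that exit code with the standard exec.ExitError pattern:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "ha-770000",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.Output()
	fmt.Print(string(out)) // per-node detail is printed either way

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A stopped node surfaces as a nonzero exit (7 in this run),
		// not as a hard failure; treat it as "degraded", not "broken".
		fmt.Println("status exit code:", ee.ExitCode())
	} else if err != nil {
		panic(err) // the binary itself could not run
	}
}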

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.9s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.90s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.19s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 node start m02 -v=7 --alsologtostderr
E0318 06:20:09.142792   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 node start m02 -v=7 --alsologtostderr: (21.016233843s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: (2.10366426s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.249864558s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.25s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.22s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-770000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-770000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-770000 -v=7 --alsologtostderr: (34.517959369s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-770000 --wait=true -v=7 --alsologtostderr
E0318 06:21:31.064434   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
E0318 06:23:03.006952   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-770000 --wait=true -v=7 --alsologtostderr: (2m36.539180082s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-770000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.22s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.57s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 node delete m03 -v=7 --alsologtostderr
E0318 06:23:47.222038   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 node delete m03 -v=7 --alsologtostderr: (11.336040519s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: (1.104747979s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.57s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

TestMultiControlPlane/serial/StopCluster (33.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 stop -v=7 --alsologtostderr
E0318 06:24:14.906643   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 stop -v=7 --alsologtostderr: (32.959762841s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: exit status 7 (222.971938ms)
-- stdout --
	ha-770000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0318 06:24:23.049859   17073 out.go:291] Setting OutFile to fd 1 ...
	I0318 06:24:23.050614   17073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:24:23.050623   17073 out.go:304] Setting ErrFile to fd 2...
	I0318 06:24:23.050629   17073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 06:24:23.051255   17073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-11233/.minikube/bin
	I0318 06:24:23.051467   17073 out.go:298] Setting JSON to false
	I0318 06:24:23.051495   17073 mustload.go:65] Loading cluster: ha-770000
	I0318 06:24:23.051531   17073 notify.go:220] Checking for updates...
	I0318 06:24:23.051774   17073 config.go:182] Loaded profile config "ha-770000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 06:24:23.051790   17073 status.go:255] checking status of ha-770000 ...
	I0318 06:24:23.052162   17073 cli_runner.go:164] Run: docker container inspect ha-770000 --format={{.State.Status}}
	I0318 06:24:23.104814   17073 status.go:330] ha-770000 host status = "Stopped" (err=<nil>)
	I0318 06:24:23.104850   17073 status.go:343] host is not running, skipping remaining checks
	I0318 06:24:23.104866   17073 status.go:257] ha-770000 status: &{Name:ha-770000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 06:24:23.104904   17073 status.go:255] checking status of ha-770000-m02 ...
	I0318 06:24:23.105201   17073 cli_runner.go:164] Run: docker container inspect ha-770000-m02 --format={{.State.Status}}
	I0318 06:24:23.157539   17073 status.go:330] ha-770000-m02 host status = "Stopped" (err=<nil>)
	I0318 06:24:23.157566   17073 status.go:343] host is not running, skipping remaining checks
	I0318 06:24:23.157575   17073 status.go:257] ha-770000-m02 status: &{Name:ha-770000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 06:24:23.157595   17073 status.go:255] checking status of ha-770000-m04 ...
	I0318 06:24:23.157850   17073 cli_runner.go:164] Run: docker container inspect ha-770000-m04 --format={{.State.Status}}
	I0318 06:24:23.208779   17073 status.go:330] ha-770000-m04 host status = "Stopped" (err=<nil>)
	I0318 06:24:23.208823   17073 status.go:343] host is not running, skipping remaining checks
	I0318 06:24:23.208833   17073 status.go:257] ha-770000-m04 status: &{Name:ha-770000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.18s)

TestMultiControlPlane/serial/RestartCluster (61.09s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-770000 --wait=true -v=7 --alsologtostderr --driver=docker 
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-770000 --wait=true -v=7 --alsologtostderr --driver=docker : (59.788708026s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: (1.169352259s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.09s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.90s)

TestMultiControlPlane/serial/AddSecondaryNode (43.93s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-770000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-770000 --control-plane -v=7 --alsologtostderr: (42.402509505s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-770000 status -v=7 --alsologtostderr: (1.522824964s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.218660594s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.22s)

TestImageBuild/serial/Setup (23.24s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-704000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-704000 --driver=docker : (23.237196344s)
--- PASS: TestImageBuild/serial/Setup (23.24s)

TestImageBuild/serial/NormalBuild (4.82s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-704000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-704000: (4.815618593s)
--- PASS: TestImageBuild/serial/NormalBuild (4.82s)

TestImageBuild/serial/BuildWithBuildArg (1.23s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-704000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-704000: (1.231719418s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.23s)

TestImageBuild/serial/BuildWithDockerIgnore (1.07s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-704000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-704000: (1.067026157s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.07s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-704000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-704000: (1.054700677s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

TestJSONOutput/start/Command (37.49s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-701000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-701000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (37.484943003s)
--- PASS: TestJSONOutput/start/Command (37.49s)
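Note: with `--output=json` the start command emits one JSON event per line, and the Audit and parallel step subtests below validate properties of that stream. A hedged Go sketch of a line-by-line consumer; the event shape (a `data` object carrying `currentstep` and `name`) is an assumption, not taken from this report:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "json-output-701000", "--output=json", "--user=testUser",
		"--memory=2200", "--wait=true", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		// Each line is one JSON event; decode generically since the
		// exact schema is assumed here rather than taken from this run.
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if data, ok := ev["data"].(map[string]any); ok {
			if step, ok := data["currentstep"]; ok {
				fmt.Println("step:", step, "-", data["name"])
			}
		}
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}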

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-701000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-701000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-701000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-701000 --output=json --user=testUser: (5.76679517s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-929000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-929000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (390.913208ms)

-- stdout --
	{"specversion":"1.0","id":"71ae2efc-67ef-4f62-ae92-b83e28ac0c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-929000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"46e2b849-edb8-4886-969e-ac20d6901595","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18429"}}
	{"specversion":"1.0","id":"4fecdffc-e4dd-4712-a951-22a6f36edd12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig"}}
	{"specversion":"1.0","id":"2701d71a-944e-4364-b567-649a4487bc71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"8cad5c37-b1c7-48a3-bd8f-2e07da539371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"47d63f8e-c55c-44d9-8861-6d3c52523f14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-11233/.minikube"}}
	{"specversion":"1.0","id":"0930d1e9-d82c-413e-940a-cbeada388a96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33185a2b-04c5-4a6b-93b7-7f2d2276cc67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-929000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-929000
--- PASS: TestErrorJSONOutput (0.78s)
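
The lines in the stdout block above are CloudEvents-style JSON objects, which is what --output=json emits; the final io.k8s.sigs.minikube.error event carries the exit code the test asserts on. A minimal standalone Go sketch of decoding one such line (not part of the test suite; the struct fields simply mirror the JSON keys visible above):

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the keys seen in the log lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the log, abbreviated to the fields used below.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
}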

TestKicCustomNetwork/create_custom_network (25.5s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-051000 --network=
E0318 06:28:03.010409   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-051000 --network=: (22.989394571s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-051000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-051000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-051000: (2.453178609s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.50s)

TestKicCustomNetwork/use_default_bridge_network (25.08s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-352000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-352000 --network=bridge: (22.74509733s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-352000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-352000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-352000: (2.279551412s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.08s)

TestKicExistingNetwork (25.51s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-013000 --network=existing-network
E0318 06:28:47.225942   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-013000 --network=existing-network: (22.797694621s)
helpers_test.go:175: Cleaning up "existing-network-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-013000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-013000: (2.259001732s)
--- PASS: TestKicExistingNetwork (25.51s)
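
As the name suggests, this test starts minikube against a Docker network that is already present rather than letting it create one. A rough standalone sketch of that flow under the same profile and network names (assumes docker and the test binary are available; error handling trimmed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pre-create the network, then hand it to minikube via --network.
	steps := [][]string{
		{"docker", "network", "create", "existing-network"},
		{"out/minikube-darwin-amd64", "start", "-p", "existing-network-013000", "--network=existing-network"},
	}
	for _, args := range steps {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("$ %v\n%s", args, out)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
	}
}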

TestKicCustomSubnet (25.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-211000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-211000 --subnet=192.168.60.0/24: (23.17934448s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-211000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-211000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-211000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-211000: (2.410159731s)
--- PASS: TestKicCustomSubnet (25.64s)
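
The --format argument passed to docker network inspect above is a Go text/template: index selects element 0 of the IPAM config list, and .Subnet reads its subnet field. A self-contained sketch of the same template over an illustrative struct (these are not docker's actual types):

package main

import (
	"os"
	"text/template"
)

// Minimal stand-ins for the fields the template touches.
type ipamConfig struct{ Subnet string }
type ipam struct{ Config []ipamConfig }
type network struct{ IPAM ipam }

func main() {
	n := network{IPAM: ipam{Config: []ipamConfig{{Subnet: "192.168.60.0/24"}}}}
	tmpl := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
	// Prints the subnet the test passed via --subnet: 192.168.60.0/24
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}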

TestKicStaticIP (24.87s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-149000 --static-ip=192.168.200.200
E0318 06:29:26.064101   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-149000 --static-ip=192.168.200.200: (22.152657526s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-149000 ip
helpers_test.go:175: Cleaning up "static-ip-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-149000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-149000: (2.457262621s)
--- PASS: TestKicStaticIP (24.87s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-632000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-632000 --driver=docker : (23.065297184s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-635000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-635000 --driver=docker : (23.13494757s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-632000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-635000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-635000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-635000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-635000: (2.274094954s)
helpers_test.go:175: Cleaning up "first-632000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-632000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-632000: (2.512711951s)
--- PASS: TestMinikubeProfile (53.00s)
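
The profile list -ojson calls above return machine-readable profile data. A hedged sketch of consuming it (the top-level valid/invalid arrays and the Name field are assumptions about minikube's current JSON shape, not verified against this run):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Assumes the same binary the test drives is on the relative path.
	out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	// Assumed shape: {"invalid":[...],"valid":[{"Name":"..."},...]}
	var profiles struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}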

TestMountStart/serial/StartWithMountFirst (8.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-776000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-776000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.167907675s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.17s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-776000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (8.4s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-789000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-789000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.400913129s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.40s)

TestPreload (131.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-947000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0318 07:18:03.073674   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/addons-636000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-947000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m18.362779679s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-947000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-947000 image pull gcr.io/k8s-minikube/busybox: (5.464191166s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-947000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-947000: (10.836600161s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-947000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0318 07:18:47.284362   11705 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18429-11233/.minikube/profiles/functional-014000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-947000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (34.27977213s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-947000 image list
helpers_test.go:175: Cleaning up "test-preload-947000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-947000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-947000: (2.509792426s)
--- PASS: TestPreload (131.81s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (19.91s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18429
- KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3200301216/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3200301216/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3200301216/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3200301216/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (19.91s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.64s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18429
- KUBECONFIG=/Users/jenkins/minikube-integration/18429-11233/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1120389274/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1120389274/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1120389274/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1120389274/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.64s)
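
Both hyperkit runs above want the driver binary owned by root:wheel with the setuid bit set (the sudo chown/chmod lines), and merely warn when they cannot escalate because --interactive=false forbids a password prompt. A small standalone sketch of checking whether a driver binary already satisfies that (the path is illustrative; the test uses a temporary MINIKUBE_HOME):

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	path := "/usr/local/bin/docker-machine-driver-hyperkit" // illustrative location
	fi, err := os.Stat(path)
	if err != nil {
		fmt.Println("stat:", err)
		return
	}
	st, ok := fi.Sys().(*syscall.Stat_t)
	if !ok {
		fmt.Println("not a unix stat")
		return
	}
	// Root-owned and setuid, as the sudo chown/chmod commands above arrange.
	fmt.Printf("owner uid=%d setuid=%t\n", st.Uid, fi.Mode()&os.ModeSetuid != 0)
}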

Test skip (19/211)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (19.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 17.872605ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vqv5g" [3d1af298-f70e-4b27-8633-985702924620] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00585207s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gqmfz" [ef551161-7148-46bb-b7b0-469c6b0f1bcc] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003808446s
addons_test.go:340: (dbg) Run:  kubectl --context addons-636000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-636000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-636000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.673982026s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (19.76s)

TestAddons/parallel/Ingress (10.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-636000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-636000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-636000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ec25a4bd-ea70-49db-a0f7-2856853b15d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ec25a4bd-ea70-49db-a0f7-2856853b15d1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004773396s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-636000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.87s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (15.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-014000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-014000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-6fthm" [fd478212-27b4-4c6c-af35-971a256362f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-6fthm" [fd478212-27b4-4c6c-af35-971a256362f6] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.00342194s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (15.14s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
