Test Report: Docker_macOS 17806

6a77cc45d797583e591eb70dcaadaee18502387b:2023-12-16:32310

Failed tests (26/191)

TestOffline (757.5s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m36.601183291s)

-- stdout --
	* [offline-docker-716000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-716000 in cluster offline-docker-716000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-716000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1216 15:10:41.588276   27675 out.go:296] Setting OutFile to fd 1 ...
	I1216 15:10:41.588559   27675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:10:41.588564   27675 out.go:309] Setting ErrFile to fd 2...
	I1216 15:10:41.588569   27675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:10:41.588752   27675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 15:10:41.590241   27675 out.go:303] Setting JSON to false
	I1216 15:10:41.613846   27675 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11410,"bootTime":1702756831,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 15:10:41.613945   27675 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 15:10:41.635900   27675 out.go:177] * [offline-docker-716000] minikube v1.32.0 on Darwin 14.2
	I1216 15:10:41.679672   27675 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 15:10:41.679750   27675 notify.go:220] Checking for updates...
	I1216 15:10:41.701720   27675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 15:10:41.727474   27675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 15:10:41.748298   27675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 15:10:41.769542   27675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 15:10:41.790435   27675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 15:10:41.811393   27675 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 15:10:41.868421   27675 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 15:10:41.868590   27675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:10:41.995699   27675 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-12-16 23:10:41.9366438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/do
cker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 15:10:42.077698   27675 out.go:177] * Using the docker driver based on user configuration
	I1216 15:10:42.119705   27675 start.go:298] selected driver: docker
	I1216 15:10:42.119719   27675 start.go:902] validating driver "docker" against <nil>
	I1216 15:10:42.119729   27675 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 15:10:42.122752   27675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:10:42.252564   27675 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-12-16 23:10:42.213594964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 15:10:42.252745   27675 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1216 15:10:42.252923   27675 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 15:10:42.273908   27675 out.go:177] * Using Docker Desktop driver with root privileges
	I1216 15:10:42.294983   27675 cni.go:84] Creating CNI manager for ""
	I1216 15:10:42.295005   27675 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 15:10:42.295015   27675 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 15:10:42.295028   27675 start_flags.go:323] config:
	{Name:offline-docker-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 15:10:42.337130   27675 out.go:177] * Starting control plane node offline-docker-716000 in cluster offline-docker-716000
	I1216 15:10:42.357956   27675 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 15:10:42.400292   27675 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 15:10:42.442295   27675 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:10:42.442365   27675 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 15:10:42.442407   27675 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 15:10:42.442425   27675 cache.go:56] Caching tarball of preloaded images
	I1216 15:10:42.442725   27675 preload.go:174] Found /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 15:10:42.442748   27675 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 15:10:42.444290   27675 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/offline-docker-716000/config.json ...
	I1216 15:10:42.444550   27675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/offline-docker-716000/config.json: {Name:mk117cfa934ab7cd88c57f551ca8afd90305c570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 15:10:42.502038   27675 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 15:10:42.502062   27675 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 15:10:42.502093   27675 cache.go:194] Successfully downloaded all kic artifacts
	I1216 15:10:42.502153   27675 start.go:365] acquiring machines lock for offline-docker-716000: {Name:mkc4b9ff93d80510a7706e7f3ed78028d4d66da5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:10:42.502349   27675 start.go:369] acquired machines lock for "offline-docker-716000" in 180.91µs
	I1216 15:10:42.502380   27675 start.go:93] Provisioning new machine with config: &{Name:offline-docker-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-716000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 15:10:42.502509   27675 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:10:42.540344   27675 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:10:42.540708   27675 start.go:159] libmachine.API.Create for "offline-docker-716000" (driver="docker")
	I1216 15:10:42.540767   27675 client.go:168] LocalClient.Create starting
	I1216 15:10:42.540983   27675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:10:42.541087   27675 main.go:141] libmachine: Decoding PEM data...
	I1216 15:10:42.541118   27675 main.go:141] libmachine: Parsing certificate...
	I1216 15:10:42.541255   27675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:10:42.541317   27675 main.go:141] libmachine: Decoding PEM data...
	I1216 15:10:42.541336   27675 main.go:141] libmachine: Parsing certificate...
	I1216 15:10:42.561791   27675 cli_runner.go:164] Run: docker network inspect offline-docker-716000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:10:42.698683   27675 cli_runner.go:211] docker network inspect offline-docker-716000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:10:42.698820   27675 network_create.go:281] running [docker network inspect offline-docker-716000] to gather additional debugging logs...
	I1216 15:10:42.698847   27675 cli_runner.go:164] Run: docker network inspect offline-docker-716000
	W1216 15:10:42.752200   27675 cli_runner.go:211] docker network inspect offline-docker-716000 returned with exit code 1
	I1216 15:10:42.752237   27675 network_create.go:284] error running [docker network inspect offline-docker-716000]: docker network inspect offline-docker-716000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-716000 not found
	I1216 15:10:42.752252   27675 network_create.go:286] output of [docker network inspect offline-docker-716000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-716000 not found
	
	** /stderr **
	I1216 15:10:42.752392   27675 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:10:42.848206   27675 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:10:42.848631   27675 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021c2e60}
	I1216 15:10:42.848647   27675 network_create.go:124] attempt to create docker network offline-docker-716000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1216 15:10:42.848719   27675 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-716000 offline-docker-716000
	I1216 15:10:42.938312   27675 network_create.go:108] docker network offline-docker-716000 192.168.58.0/24 created
	I1216 15:10:42.938364   27675 kic.go:121] calculated static IP "192.168.58.2" for the "offline-docker-716000" container
	I1216 15:10:42.938493   27675 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:10:42.992409   27675 cli_runner.go:164] Run: docker volume create offline-docker-716000 --label name.minikube.sigs.k8s.io=offline-docker-716000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:10:43.046400   27675 oci.go:103] Successfully created a docker volume offline-docker-716000
	I1216 15:10:43.046525   27675 cli_runner.go:164] Run: docker run --rm --name offline-docker-716000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-716000 --entrypoint /usr/bin/test -v offline-docker-716000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:10:43.706479   27675 oci.go:107] Successfully prepared a docker volume offline-docker-716000
	I1216 15:10:43.706525   27675 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:10:43.706539   27675 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:10:43.706662   27675 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-716000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 15:16:42.559175   27675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:16:42.559253   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:42.610983   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:16:42.611103   27675 retry.go:31] will retry after 176.737147ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:42.788608   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:42.841313   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:16:42.841425   27675 retry.go:31] will retry after 503.523856ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:43.346687   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:43.398243   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:16:43.398349   27675 retry.go:31] will retry after 776.833179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:44.175759   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:44.230377   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	W1216 15:16:44.230500   27675 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	
	W1216 15:16:44.230521   27675 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:44.230585   27675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:16:44.230651   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:44.280789   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:16:44.280886   27675 retry.go:31] will retry after 268.615297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:44.550447   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:44.604382   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:16:44.604474   27675 retry.go:31] will retry after 376.983356ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:44.983874   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:45.046117   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:16:45.046220   27675 retry.go:31] will retry after 298.874308ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:45.347434   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:16:45.401399   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	W1216 15:16:45.401511   27675 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	
	W1216 15:16:45.401530   27675 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:45.401546   27675 start.go:128] duration metric: createHost completed in 6m2.880907252s
	I1216 15:16:45.401552   27675 start.go:83] releasing machines lock for "offline-docker-716000", held for 6m2.881080877s
	W1216 15:16:45.401563   27675 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1216 15:16:45.402001   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:45.507758   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:45.507819   27675 delete.go:82] Unable to get host status for offline-docker-716000, assuming it has already been deleted: state: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	W1216 15:16:45.507889   27675 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1216 15:16:45.507898   27675 start.go:709] Will try again in 5 seconds ...
	I1216 15:16:50.508756   27675 start.go:365] acquiring machines lock for offline-docker-716000: {Name:mkc4b9ff93d80510a7706e7f3ed78028d4d66da5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:16:50.508906   27675 start.go:369] acquired machines lock for "offline-docker-716000" in 112.911µs
	I1216 15:16:50.508935   27675 start.go:96] Skipping create...Using existing machine configuration
	I1216 15:16:50.508947   27675 fix.go:54] fixHost starting: 
	I1216 15:16:50.509301   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:50.561105   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:50.561149   27675 fix.go:102] recreateIfNeeded on offline-docker-716000: state= err=unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:50.561170   27675 fix.go:107] machineExists: false. err=machine does not exist
	I1216 15:16:50.582625   27675 out.go:177] * docker "offline-docker-716000" container is missing, will recreate.
	I1216 15:16:50.604456   27675 delete.go:124] DEMOLISHING offline-docker-716000 ...
	I1216 15:16:50.604648   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:50.656311   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	W1216 15:16:50.656374   27675 stop.go:75] unable to get state: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:50.656393   27675 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:50.656781   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:50.706659   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:50.706728   27675 delete.go:82] Unable to get host status for offline-docker-716000, assuming it has already been deleted: state: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:50.706821   27675 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-716000
	W1216 15:16:50.757469   27675 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-716000 returned with exit code 1
	I1216 15:16:50.757522   27675 kic.go:371] could not find the container offline-docker-716000 to remove it. will try anyways
	I1216 15:16:50.757612   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:50.808039   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	W1216 15:16:50.808090   27675 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:50.808188   27675 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-716000 /bin/bash -c "sudo init 0"
	W1216 15:16:50.858644   27675 cli_runner.go:211] docker exec --privileged -t offline-docker-716000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 15:16:50.858689   27675 oci.go:650] error shutdown offline-docker-716000: docker exec --privileged -t offline-docker-716000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:51.859744   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:51.914191   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:51.914254   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:51.914279   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:16:51.914302   27675 retry.go:31] will retry after 716.136265ms: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:52.630838   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:52.684151   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:52.684202   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:52.684211   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:16:52.684235   27675 retry.go:31] will retry after 466.984061ms: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:53.152992   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:53.206308   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:53.206359   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:53.206373   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:16:53.206393   27675 retry.go:31] will retry after 1.577066937s: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:54.784740   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:54.839236   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:54.839279   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:54.839290   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:16:54.839316   27675 retry.go:31] will retry after 1.517559367s: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:56.359253   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:56.414187   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:56.414234   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:56.414247   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:16:56.414271   27675 retry.go:31] will retry after 2.076766681s: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:58.491296   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:16:58.543920   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:16:58.543967   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:16:58.543977   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:16:58.544001   27675 retry.go:31] will retry after 2.848680448s: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:17:01.393140   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:17:01.445967   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:01.446017   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:17:01.446031   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:17:01.446054   27675 retry.go:31] will retry after 4.269256746s: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:17:05.715667   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:17:05.770085   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:05.770145   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:17:05.770155   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:17:05.770181   27675 retry.go:31] will retry after 4.935106045s: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:17:10.705663   27675 cli_runner.go:164] Run: docker container inspect offline-docker-716000 --format={{.State.Status}}
	W1216 15:17:10.758736   27675 cli_runner.go:211] docker container inspect offline-docker-716000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:10.758784   27675 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:17:10.758794   27675 oci.go:664] temporary error: container offline-docker-716000 status is  but expect it to be exited
	I1216 15:17:10.758827   27675 oci.go:88] couldn't shut down offline-docker-716000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	 
	I1216 15:17:10.758911   27675 cli_runner.go:164] Run: docker rm -f -v offline-docker-716000
	I1216 15:17:10.809827   27675 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-716000
	W1216 15:17:10.860509   27675 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-716000 returned with exit code 1
	I1216 15:17:10.860623   27675 cli_runner.go:164] Run: docker network inspect offline-docker-716000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:17:10.912060   27675 cli_runner.go:164] Run: docker network rm offline-docker-716000
	I1216 15:17:11.022095   27675 fix.go:114] Sleeping 1 second for extra luck!
	I1216 15:17:12.022772   27675 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:17:12.044637   27675 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:17:12.044748   27675 start.go:159] libmachine.API.Create for "offline-docker-716000" (driver="docker")
	I1216 15:17:12.044766   27675 client.go:168] LocalClient.Create starting
	I1216 15:17:12.044876   27675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:17:12.044931   27675 main.go:141] libmachine: Decoding PEM data...
	I1216 15:17:12.044945   27675 main.go:141] libmachine: Parsing certificate...
	I1216 15:17:12.044989   27675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:17:12.045024   27675 main.go:141] libmachine: Decoding PEM data...
	I1216 15:17:12.045032   27675 main.go:141] libmachine: Parsing certificate...
	I1216 15:17:12.045400   27675 cli_runner.go:164] Run: docker network inspect offline-docker-716000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:17:12.097803   27675 cli_runner.go:211] docker network inspect offline-docker-716000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:17:12.097917   27675 network_create.go:281] running [docker network inspect offline-docker-716000] to gather additional debugging logs...
	I1216 15:17:12.097934   27675 cli_runner.go:164] Run: docker network inspect offline-docker-716000
	W1216 15:17:12.148663   27675 cli_runner.go:211] docker network inspect offline-docker-716000 returned with exit code 1
	I1216 15:17:12.148697   27675 network_create.go:284] error running [docker network inspect offline-docker-716000]: docker network inspect offline-docker-716000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-716000 not found
	I1216 15:17:12.148715   27675 network_create.go:286] output of [docker network inspect offline-docker-716000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-716000 not found
	
	** /stderr **
	I1216 15:17:12.148869   27675 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:17:12.201362   27675 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:12.202824   27675 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:12.204431   27675 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:12.205969   27675 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:12.206379   27675 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002385b20}
	I1216 15:17:12.206392   27675 network_create.go:124] attempt to create docker network offline-docker-716000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1216 15:17:12.206464   27675 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-716000 offline-docker-716000
	I1216 15:17:12.293183   27675 network_create.go:108] docker network offline-docker-716000 192.168.85.0/24 created
	I1216 15:17:12.293224   27675 kic.go:121] calculated static IP "192.168.85.2" for the "offline-docker-716000" container
	I1216 15:17:12.293339   27675 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:17:12.345664   27675 cli_runner.go:164] Run: docker volume create offline-docker-716000 --label name.minikube.sigs.k8s.io=offline-docker-716000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:17:12.396056   27675 oci.go:103] Successfully created a docker volume offline-docker-716000
	I1216 15:17:12.396174   27675 cli_runner.go:164] Run: docker run --rm --name offline-docker-716000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-716000 --entrypoint /usr/bin/test -v offline-docker-716000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:17:12.697732   27675 oci.go:107] Successfully prepared a docker volume offline-docker-716000
	I1216 15:17:12.697768   27675 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:17:12.697780   27675 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:17:12.697882   27675 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-716000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 15:23:12.049292   27675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:23:12.049419   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:12.106984   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:12.107105   27675 retry.go:31] will retry after 308.763639ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:12.416434   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:12.468860   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:12.468988   27675 retry.go:31] will retry after 338.485135ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:12.808669   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:12.861064   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:12.861164   27675 retry.go:31] will retry after 794.610124ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:13.657015   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:13.712477   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	W1216 15:23:13.712595   27675 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	
	W1216 15:23:13.712614   27675 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:13.712679   27675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:23:13.712736   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:13.762482   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:13.762597   27675 retry.go:31] will retry after 261.752419ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:14.026665   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:14.079809   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:14.079918   27675 retry.go:31] will retry after 364.05413ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:14.446272   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:14.497968   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:14.498064   27675 retry.go:31] will retry after 572.268427ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:15.070694   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:15.125360   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	W1216 15:23:15.125474   27675 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	
	W1216 15:23:15.125490   27675 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:15.125496   27675 start.go:128] duration metric: createHost completed in 6m3.09842004s
	I1216 15:23:15.125566   27675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:23:15.125637   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:15.176136   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:15.176260   27675 retry.go:31] will retry after 219.976298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:15.397778   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:15.509900   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:15.509991   27675 retry.go:31] will retry after 560.407626ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:16.070756   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:16.124713   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:16.124814   27675 retry.go:31] will retry after 641.905781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:16.768211   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:16.822546   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	W1216 15:23:16.822650   27675 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	
	W1216 15:23:16.822666   27675 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:16.822740   27675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:23:16.822799   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:16.873274   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:16.873369   27675 retry.go:31] will retry after 141.010304ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:17.016750   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:17.070318   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:17.070414   27675 retry.go:31] will retry after 390.316143ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:17.461888   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:17.516153   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	I1216 15:23:17.516255   27675 retry.go:31] will retry after 453.444045ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:17.972136   27675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000
	W1216 15:23:18.026228   27675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000 returned with exit code 1
	W1216 15:23:18.026333   27675 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	
	W1216 15:23:18.026356   27675 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-716000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-716000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000
	I1216 15:23:18.026371   27675 fix.go:56] fixHost completed within 6m27.512865517s
	I1216 15:23:18.026378   27675 start.go:83] releasing machines lock for "offline-docker-716000", held for 6m27.512900394s
	W1216 15:23:18.026456   27675 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-716000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-716000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1216 15:23:18.048162   27675 out.go:177] 
	W1216 15:23:18.069864   27675 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1216 15:23:18.069925   27675 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1216 15:23:18.069959   27675 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1216 15:23:18.112731   27675 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2023-12-16 15:23:18.169202 -0800 PST m=+6136.852487771
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-716000
helpers_test.go:235: (dbg) docker inspect offline-docker-716000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "offline-docker-716000",
	        "Id": "7146fb42731987cb461fb114be1561ada0f21d40c9139b7e6e1890fcc547b1ec",
	        "Created": "2023-12-16T23:17:12.253069304Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-716000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-716000 -n offline-docker-716000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-716000 -n offline-docker-716000: exit status 7 (107.50309ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 15:23:18.329906   28501 status.go:249] status error: host: state: unknown state "offline-docker-716000": docker container inspect offline-docker-716000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-716000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-716000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-716000
--- FAIL: TestOffline (757.50s)
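The repeated `retry.go:31 ... will retry after Nms` lines above come from minikube polling `docker container inspect` for the SSH port of a container that was never created, until the 360-second create-host timeout fires. As an illustration only (a minimal sketch, not minikube's actual retry.go), a retry helper with a growing delay looks roughly like this; the attempt count, delays, and the stand-in operation are assumptions for the example:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retry runs op up to maxAttempts times, sleeping a little longer after
	// each failure, and returns the last error if every attempt fails.
	func retry(maxAttempts int, initialDelay time.Duration, op func() error) error {
		delay := initialDelay
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; will retry after %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait between attempts
		}
		return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
	}

	func main() {
		err := retry(4, 300*time.Millisecond, func() error {
			// Stand-in for "docker container inspect ... offline-docker-716000",
			// which keeps failing in the log because the container is missing.
			return errors.New("No such container: offline-docker-716000")
		})
		fmt.Println(err)
	}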

                                                
                                    
TestCertOptions (7200.815s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-595000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E1216 15:36:56.752720   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:37:10.651574   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 15:37:27.592921   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (5m10s)
	TestCertOptions (4m32s)
	TestNetworkPlugins (30m20s)
	TestNetworkPlugins/group (30m20s)

                                                
                                                
goroutine 2133 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0005a2680, 0xc0006c1b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc000748280?, {0x5274b80, 0x2a, 0x2a}, {0x10b0145?, 0xc000068180?, 0x5296380?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc000748280)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000134380)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 882 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 881
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 70 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 69
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

                                                
                                                
goroutine 1225 [select, 107 minutes]:
net/http.(*persistConn).readLoop(0xc0023ee7e0)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1238
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

                                                
                                                
goroutine 566 [syscall, 4 minutes]:
syscall.syscall6(0x1010585?, 0xc000c658f8?, 0xc000c657e8?, 0xc000c65918?, 0x100c000c658e0?, 0x1000000000003?, 0x4ca57890?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000c65890?, 0x1010905?, 0x90?, 0x305a380?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc0009507e0?, 0xc000c658c4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000699950)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0027ca580)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000702340?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000702340, 0xc0027ca580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertOptions(0xc000702340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x40e
testing.tRunner(0xc000702340, 0x3b3c778)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1832 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000007380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000007380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000007380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000007380, 0xc000210280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 567 [syscall, 5 minutes]:
syscall.syscall6(0x1010585?, 0xc0009a3a98?, 0xc0009a3988?, 0xc0009a3ab8?, 0x100c0009a3a80?, 0x1000000000003?, 0x4c77eb18?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0009a3a30?, 0x1010905?, 0x90?, 0x305a380?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc0026c89a0?, 0xc0009a3a64, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002a4e2d0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0023202c0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000702680?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000702680, 0xc0023202c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertExpiration(0xc000702680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2d7
testing.tRunner(0xc000702680, 0x3b3c770)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1835 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00229c1a0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00229c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00229c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00229c1a0, 0xc000210400)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1830 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000683d40)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000683d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000683d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000683d40, 0xc000210000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1201 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023eab00, 0xc00234ed80)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 753
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1837 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00229c680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00229c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00229c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00229c680, 0xc000210580)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 864 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0029eb850, 0x2b)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f87de0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022d4780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0029eb880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c36910?, {0x3f8c300, 0xc000c57fb0}, 0x1, 0xc0004de0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004df500?, 0x3b9aca00, 0x0, 0xd0?, 0x104475c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117bd65?, 0xc00217f080?, 0xc0022c2240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 168 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000934fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 169 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b74a40, 0xc0004de0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 1209 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023eaf20, 0xc00234f140)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1208
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 172 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000b74a10, 0x2c)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f87de0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000934ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b74a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f8c300, 0xc00066dcb0}, 0x1, 0xc0004de0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc00050bfd0?, 0x15e8c85?, 0xc000934fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 173 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3faee38, 0xc0004de0c0}, 0xc000088f50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3faee38, 0xc0004de0c0}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3faee38?, 0xc0004de0c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 174 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 1745 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025901a0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025901a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0025901a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0025901a0, 0x3b3c860)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2122 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4ca536c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002432b40?, 0xc0006c3000?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002432b40, {0xc0006c3000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00008e5b0, {0xc0006c3000?, 0xc002468668?, 0xc002468668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028e44e0, {0x3f8ae00, 0xc00008e5b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8ae80, 0xc0028e44e0}, {0x3f8ae00, 0xc00008e5b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002a8b1e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 567
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1744 [chan receive, 30 minutes]:
testing.(*T).Run(0xc002590000, {0x30ec0e1?, 0x9691e66d804?}, 0xc0026da030)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002590000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002590000, 0x3b3c858)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2147 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4ca538b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002838b40?, 0xc000be0283?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002838b40, {0xc000be0283, 0x57d, 0x57d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000c362f0, {0xc000be0283?, 0xc000c677a0?, 0xc002346e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002714930, {0x3f8ae00, 0xc000c362f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8ae80, 0xc002714930}, {0x3f8ae00, 0xc000c362f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00270c7e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 566
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 642 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x4ca53b98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000b9c300?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000b9c300)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000b9c300)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc002a8aae0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc002a8aae0)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc00058c1e0, {0x3fa25c0, 0xc002a8aae0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc00058c1e0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0005a3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 639
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

                                                
                                                
goroutine 1838 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00229c820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00229c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00229c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00229c820, 0xc000210600)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1807 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002590b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002590b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc00282c2a0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc002590b60, 0x3b3c8a0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2148 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4ca53aa0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002838c00?, 0xc0006c3600?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002838c00, {0xc0006c3600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000c36378, {0xc0006c3600?, 0xc00246b668?, 0xc00246b668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002714960, {0x3f8ae00, 0xc000c36378})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8ae80, 0xc002714960}, {0x3f8ae00, 0xc000c36378}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00270c5a0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 566
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1025 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027b7340, 0xc0027b8600)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1008
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1819 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002591040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002591040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc002591040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:228 +0x39
testing.tRunner(0xc002591040, 0x3b3c828)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1820 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002234000)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002234000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc002234000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:305 +0xb4
testing.tRunner(0xc002234000, 0x3b3c840)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2121 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4d37b5a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002432a80?, 0xc000c1d28c?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002432a80, {0xc000c1d28c, 0x574, 0x574})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00008e558, {0xc000c1d28c?, 0xc0022d4d20?, 0xc002468e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028e44b0, {0x3f8ae00, 0xc00008e558})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8ae80, 0xc0028e44b0}, {0x3f8ae00, 0xc00008e558}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0027e22a0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 567
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 2149 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027ca580, 0xc00270c8a0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 566
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1834 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00229c000)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00229c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00229c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00229c000, 0xc000210380)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 881 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3faee38, 0xc0004de0c0}, 0xc002345f50, 0xc0022d4058?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3faee38, 0xc0004de0c0}, 0x1?, 0x1?, 0xc002345fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3faee38?, 0xc0004de0c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002345fd0?, 0x117bdc7?, 0xc00057c700?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 874 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0029eb880, 0xc0004de0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 766
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 873 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022d48a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 766
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 1817 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002590820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002590820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002590820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:98 +0x89
testing.tRunner(0xc002590820, 0x3b3c880)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1829 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000683a00, 0xc0026da030)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1744
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1831 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0007024e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0007024e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007024e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0007024e0, 0xc000210200)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1818 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002590ea0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002590ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc002590ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:156 +0x86
testing.tRunner(0xc002590ea0, 0x3b3c8a8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1833 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000007a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000007a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000007a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000007a00, 0xc000210300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1226 [select, 107 minutes]:
net/http.(*persistConn).writeLoop(0xc0023ee7e0)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1238
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

goroutine 1746 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002590680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002590680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc002590680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc002590680, 0x3b3c870)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1836 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00099f7c0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00229c340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00229c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00229c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00229c340, 0xc000210500)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1829
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1175 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023ea420, 0xc00234e6c0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1174
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 2123 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023202c0, 0xc0027e2360)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 567
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

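The goroutine dump above shows why the remaining tests (TestRunningBinaryUpgrade, TestStoppedBinaryUpgrade, TestPause, and the TestNetworkPlugins subtests) sat idle for 30 minutes: each is parked in testing.(*testContext).waitParallel after calling the suite's MaybeParallel helper, which calls t.Parallel(), and t.Parallel() pauses the test until the runner frees one of the -test.parallel slots (GOMAXPROCS by default). A minimal sketch of that pattern, using hypothetical names rather than the actual minikube helpers:

	package example

	import "testing"

	// maybeParallel mirrors the role of a MaybeParallel-style helper:
	// opt the test in to parallel execution unless the run is serial.
	func maybeParallel(t *testing.T, serial bool) {
		t.Helper()
		if serial {
			return
		}
		// t.Parallel() pauses the test inside testContext.waitParallel
		// (the "chan receive" frames in the stacks above) until a slot,
		// limited by -test.parallel, becomes available.
		t.Parallel()
	}

	func TestA(t *testing.T) { maybeParallel(t, false) }
	func TestB(t *testing.T) { maybeParallel(t, false) }
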
TestDockerFlags (758.74s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags


=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E1216 15:26:56.892806   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:27:27.731868   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 15:31:39.950009   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:31:56.896405   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:32:27.735313   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.432097363s)

-- stdout --
	* [docker-flags-830000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-830000 in cluster docker-flags-830000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-830000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1216 15:23:50.937347   28648 out.go:296] Setting OutFile to fd 1 ...
	I1216 15:23:50.937564   28648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:23:50.937569   28648 out.go:309] Setting ErrFile to fd 2...
	I1216 15:23:50.937573   28648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:23:50.937781   28648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 15:23:50.939333   28648 out.go:303] Setting JSON to false
	I1216 15:23:50.961867   28648 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":12199,"bootTime":1702756831,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 15:23:50.961960   28648 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 15:23:50.983824   28648 out.go:177] * [docker-flags-830000] minikube v1.32.0 on Darwin 14.2
	I1216 15:23:51.027514   28648 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 15:23:51.027564   28648 notify.go:220] Checking for updates...
	I1216 15:23:51.070247   28648 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 15:23:51.091306   28648 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 15:23:51.133344   28648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 15:23:51.175399   28648 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 15:23:51.196468   28648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 15:23:51.219413   28648 config.go:182] Loaded profile config "force-systemd-flag-603000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 15:23:51.219592   28648 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 15:23:51.276242   28648 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 15:23:51.276404   28648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:23:51.377293   28648 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:198 SystemTime:2023-12-16 23:23:51.367089203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 15:23:51.398991   28648 out.go:177] * Using the docker driver based on user configuration
	I1216 15:23:51.420833   28648 start.go:298] selected driver: docker
	I1216 15:23:51.420856   28648 start.go:902] validating driver "docker" against <nil>
	I1216 15:23:51.420872   28648 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 15:23:51.424681   28648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:23:51.523131   28648 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:198 SystemTime:2023-12-16 23:23:51.513159203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 15:23:51.523296   28648 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1216 15:23:51.523481   28648 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1216 15:23:51.544991   28648 out.go:177] * Using Docker Desktop driver with root privileges
	I1216 15:23:51.566939   28648 cni.go:84] Creating CNI manager for ""
	I1216 15:23:51.566981   28648 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 15:23:51.566998   28648 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 15:23:51.567013   28648 start_flags.go:323] config:
	{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s GPUs:}
	I1216 15:23:51.588852   28648 out.go:177] * Starting control plane node docker-flags-830000 in cluster docker-flags-830000
	I1216 15:23:51.630817   28648 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 15:23:51.652677   28648 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 15:23:51.694721   28648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:23:51.694796   28648 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 15:23:51.694804   28648 cache.go:56] Caching tarball of preloaded images
	I1216 15:23:51.694810   28648 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 15:23:51.694967   28648 preload.go:174] Found /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 15:23:51.694978   28648 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 15:23:51.695049   28648 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/docker-flags-830000/config.json ...
	I1216 15:23:51.695099   28648 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/docker-flags-830000/config.json: {Name:mk2f87c4195bfb623ba068dc404ab1f91b2e8c8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 15:23:51.746586   28648 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 15:23:51.746615   28648 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 15:23:51.746648   28648 cache.go:194] Successfully downloaded all kic artifacts
	I1216 15:23:51.746701   28648 start.go:365] acquiring machines lock for docker-flags-830000: {Name:mka564d65e5961aac9fb315fa66a3cb6eebb2df6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:23:51.746859   28648 start.go:369] acquired machines lock for "docker-flags-830000" in 144.108µs
	I1216 15:23:51.746884   28648 start.go:93] Provisioning new machine with config: &{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-830000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 15:23:51.746984   28648 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:23:51.789858   28648 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:23:51.790225   28648 start.go:159] libmachine.API.Create for "docker-flags-830000" (driver="docker")
	I1216 15:23:51.790285   28648 client.go:168] LocalClient.Create starting
	I1216 15:23:51.790459   28648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:23:51.790553   28648 main.go:141] libmachine: Decoding PEM data...
	I1216 15:23:51.790588   28648 main.go:141] libmachine: Parsing certificate...
	I1216 15:23:51.790677   28648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:23:51.790758   28648 main.go:141] libmachine: Decoding PEM data...
	I1216 15:23:51.790776   28648 main.go:141] libmachine: Parsing certificate...
	I1216 15:23:51.791838   28648 cli_runner.go:164] Run: docker network inspect docker-flags-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:23:51.842876   28648 cli_runner.go:211] docker network inspect docker-flags-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:23:51.842972   28648 network_create.go:281] running [docker network inspect docker-flags-830000] to gather additional debugging logs...
	I1216 15:23:51.842988   28648 cli_runner.go:164] Run: docker network inspect docker-flags-830000
	W1216 15:23:51.893055   28648 cli_runner.go:211] docker network inspect docker-flags-830000 returned with exit code 1
	I1216 15:23:51.893094   28648 network_create.go:284] error running [docker network inspect docker-flags-830000]: docker network inspect docker-flags-830000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-830000 not found
	I1216 15:23:51.893108   28648 network_create.go:286] output of [docker network inspect docker-flags-830000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-830000 not found
	
	** /stderr **
	I1216 15:23:51.893245   28648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:23:51.945281   28648 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:23:51.946704   28648 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:23:51.947099   28648 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002160f20}
	I1216 15:23:51.947116   28648 network_create.go:124] attempt to create docker network docker-flags-830000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1216 15:23:51.947197   28648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-830000 docker-flags-830000
	W1216 15:23:51.997381   28648 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-830000 docker-flags-830000 returned with exit code 1
	W1216 15:23:51.997411   28648 network_create.go:149] failed to create docker network docker-flags-830000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-830000 docker-flags-830000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1216 15:23:51.997435   28648 network_create.go:116] failed to create docker network docker-flags-830000 192.168.67.0/24, will retry: subnet is taken
	I1216 15:23:51.999067   28648 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:23:51.999461   28648 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023195a0}
	I1216 15:23:51.999478   28648 network_create.go:124] attempt to create docker network docker-flags-830000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1216 15:23:51.999549   28648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-830000 docker-flags-830000
	I1216 15:23:52.087380   28648 network_create.go:108] docker network docker-flags-830000 192.168.76.0/24 created
	I1216 15:23:52.087430   28648 kic.go:121] calculated static IP "192.168.76.2" for the "docker-flags-830000" container
	I1216 15:23:52.087546   28648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:23:52.140440   28648 cli_runner.go:164] Run: docker volume create docker-flags-830000 --label name.minikube.sigs.k8s.io=docker-flags-830000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:23:52.192073   28648 oci.go:103] Successfully created a docker volume docker-flags-830000
	I1216 15:23:52.192184   28648 cli_runner.go:164] Run: docker run --rm --name docker-flags-830000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-830000 --entrypoint /usr/bin/test -v docker-flags-830000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:23:52.542042   28648 oci.go:107] Successfully prepared a docker volume docker-flags-830000
	I1216 15:23:52.542084   28648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:23:52.542098   28648 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:23:52.542202   28648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-830000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 15:29:51.795785   28648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:29:51.795930   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:51.848571   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:29:51.848702   28648 retry.go:31] will retry after 196.526539ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:52.045549   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:52.099522   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:29:52.099636   28648 retry.go:31] will retry after 545.761693ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:52.646265   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:52.700443   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:29:52.700534   28648 retry.go:31] will retry after 561.714559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:53.262682   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:53.317614   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	W1216 15:29:53.317715   28648 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	W1216 15:29:53.317736   28648 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:53.317799   28648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:29:53.317852   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:53.368217   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:29:53.368328   28648 retry.go:31] will retry after 311.886205ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:53.682412   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:53.736318   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:29:53.736414   28648 retry.go:31] will retry after 377.013694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:54.113858   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:54.166951   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:29:54.167048   28648 retry.go:31] will retry after 611.321927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:54.779532   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:29:54.834038   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	W1216 15:29:54.834134   28648 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	W1216 15:29:54.834155   28648 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:54.834177   28648 start.go:128] duration metric: createHost completed in 6m3.082906958s
	I1216 15:29:54.834184   28648 start.go:83] releasing machines lock for "docker-flags-830000", held for 6m3.083043965s
	W1216 15:29:54.834197   28648 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1216 15:29:54.834637   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:29:54.884773   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:54.884826   28648 delete.go:82] Unable to get host status for docker-flags-830000, assuming it has already been deleted: state: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	W1216 15:29:54.884900   28648 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1216 15:29:54.884911   28648 start.go:709] Will try again in 5 seconds ...
	I1216 15:29:59.886182   28648 start.go:365] acquiring machines lock for docker-flags-830000: {Name:mka564d65e5961aac9fb315fa66a3cb6eebb2df6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:29:59.886431   28648 start.go:369] acquired machines lock for "docker-flags-830000" in 183.833µs
	I1216 15:29:59.886471   28648 start.go:96] Skipping create...Using existing machine configuration
	I1216 15:29:59.886485   28648 fix.go:54] fixHost starting: 
	I1216 15:29:59.887005   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:29:59.940741   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:59.940787   28648 fix.go:102] recreateIfNeeded on docker-flags-830000: state= err=unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:29:59.940802   28648 fix.go:107] machineExists: false. err=machine does not exist
	I1216 15:29:59.962565   28648 out.go:177] * docker "docker-flags-830000" container is missing, will recreate.
	I1216 15:30:00.006077   28648 delete.go:124] DEMOLISHING docker-flags-830000 ...
	I1216 15:30:00.006265   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:00.059014   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	W1216 15:30:00.059080   28648 stop.go:75] unable to get state: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:00.059101   28648 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:00.059485   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:00.109694   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:00.109762   28648 delete.go:82] Unable to get host status for docker-flags-830000, assuming it has already been deleted: state: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:00.109855   28648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-830000
	W1216 15:30:00.160184   28648 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-830000 returned with exit code 1
	I1216 15:30:00.160218   28648 kic.go:371] could not find the container docker-flags-830000 to remove it. will try anyways
	I1216 15:30:00.160291   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:00.209443   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	W1216 15:30:00.209486   28648 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:00.209573   28648 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-830000 /bin/bash -c "sudo init 0"
	W1216 15:30:00.259243   28648 cli_runner.go:211] docker exec --privileged -t docker-flags-830000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 15:30:00.259274   28648 oci.go:650] error shutdown docker-flags-830000: docker exec --privileged -t docker-flags-830000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:01.260135   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:01.313774   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:01.313821   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:01.313834   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:01.313862   28648 retry.go:31] will retry after 304.733838ms: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:01.620952   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:01.672664   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:01.672723   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:01.672734   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:01.672758   28648 retry.go:31] will retry after 810.876801ms: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:02.484115   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:02.537779   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:02.537838   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:02.537847   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:02.537869   28648 retry.go:31] will retry after 1.307824482s: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:03.846803   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:03.900980   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:03.901029   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:03.901039   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:03.901065   28648 retry.go:31] will retry after 1.829988559s: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:05.731935   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:05.787056   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:05.787110   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:05.787119   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:05.787140   28648 retry.go:31] will retry after 2.23080957s: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:08.019005   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:08.072129   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:08.072184   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:08.072195   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:08.072220   28648 retry.go:31] will retry after 4.583565231s: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:12.658166   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:12.712423   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:12.712469   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:12.712483   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:12.712508   28648 retry.go:31] will retry after 7.470195237s: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:20.183903   28648 cli_runner.go:164] Run: docker container inspect docker-flags-830000 --format={{.State.Status}}
	W1216 15:30:20.237849   28648 cli_runner.go:211] docker container inspect docker-flags-830000 --format={{.State.Status}} returned with exit code 1
	I1216 15:30:20.237901   28648 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:30:20.237909   28648 oci.go:664] temporary error: container docker-flags-830000 status is  but expect it to be exited
	I1216 15:30:20.237935   28648 oci.go:88] couldn't shut down docker-flags-830000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	 
	I1216 15:30:20.238025   28648 cli_runner.go:164] Run: docker rm -f -v docker-flags-830000
	I1216 15:30:20.288533   28648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-830000
	W1216 15:30:20.338270   28648 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-830000 returned with exit code 1
	I1216 15:30:20.338392   28648 cli_runner.go:164] Run: docker network inspect docker-flags-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:30:20.389274   28648 cli_runner.go:164] Run: docker network rm docker-flags-830000
	I1216 15:30:20.489139   28648 fix.go:114] Sleeping 1 second for extra luck!
	I1216 15:30:21.489722   28648 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:30:21.511444   28648 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:30:21.511552   28648 start.go:159] libmachine.API.Create for "docker-flags-830000" (driver="docker")
	I1216 15:30:21.511580   28648 client.go:168] LocalClient.Create starting
	I1216 15:30:21.511684   28648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:30:21.511734   28648 main.go:141] libmachine: Decoding PEM data...
	I1216 15:30:21.511746   28648 main.go:141] libmachine: Parsing certificate...
	I1216 15:30:21.511788   28648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:30:21.511826   28648 main.go:141] libmachine: Decoding PEM data...
	I1216 15:30:21.511834   28648 main.go:141] libmachine: Parsing certificate...
	I1216 15:30:21.512209   28648 cli_runner.go:164] Run: docker network inspect docker-flags-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:30:21.562983   28648 cli_runner.go:211] docker network inspect docker-flags-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:30:21.563079   28648 network_create.go:281] running [docker network inspect docker-flags-830000] to gather additional debugging logs...
	I1216 15:30:21.563101   28648 cli_runner.go:164] Run: docker network inspect docker-flags-830000
	W1216 15:30:21.613144   28648 cli_runner.go:211] docker network inspect docker-flags-830000 returned with exit code 1
	I1216 15:30:21.613177   28648 network_create.go:284] error running [docker network inspect docker-flags-830000]: docker network inspect docker-flags-830000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-830000 not found
	I1216 15:30:21.613202   28648 network_create.go:286] output of [docker network inspect docker-flags-830000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-830000 not found
	
	** /stderr **
	I1216 15:30:21.613336   28648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:30:21.666407   28648 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:30:21.667998   28648 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:30:21.669295   28648 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:30:21.670663   28648 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:30:21.672217   28648 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:30:21.672591   28648 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002319910}
	I1216 15:30:21.672605   28648 network_create.go:124] attempt to create docker network docker-flags-830000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1216 15:30:21.672673   28648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-830000 docker-flags-830000
	I1216 15:30:21.758479   28648 network_create.go:108] docker network docker-flags-830000 192.168.94.0/24 created
	I1216 15:30:21.758518   28648 kic.go:121] calculated static IP "192.168.94.2" for the "docker-flags-830000" container
	I1216 15:30:21.758639   28648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:30:21.811951   28648 cli_runner.go:164] Run: docker volume create docker-flags-830000 --label name.minikube.sigs.k8s.io=docker-flags-830000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:30:21.862311   28648 oci.go:103] Successfully created a docker volume docker-flags-830000
	I1216 15:30:21.862436   28648 cli_runner.go:164] Run: docker run --rm --name docker-flags-830000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-830000 --entrypoint /usr/bin/test -v docker-flags-830000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:30:22.174787   28648 oci.go:107] Successfully prepared a docker volume docker-flags-830000
	I1216 15:30:22.174814   28648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:30:22.174827   28648 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:30:22.174935   28648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-830000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 15:36:21.369939   28648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:36:21.370071   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:21.421355   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:21.421446   28648 retry.go:31] will retry after 309.976689ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:21.732921   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:21.786487   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:21.786608   28648 retry.go:31] will retry after 554.331918ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:22.342923   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:22.397128   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:22.397248   28648 retry.go:31] will retry after 319.156445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:22.716784   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:22.771441   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	W1216 15:36:22.771555   28648 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	W1216 15:36:22.771578   28648 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:22.771649   28648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:36:22.771711   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:22.821409   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:22.821517   28648 retry.go:31] will retry after 265.71438ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:23.089360   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:23.140846   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:23.140944   28648 retry.go:31] will retry after 451.119736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:23.594451   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:23.648489   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:23.648597   28648 retry.go:31] will retry after 739.429703ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:24.388699   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:24.442502   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	W1216 15:36:24.442601   28648 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	W1216 15:36:24.442624   28648 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:24.442639   28648 start.go:128] duration metric: createHost completed in 6m3.095799132s
	I1216 15:36:24.442708   28648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:36:24.442762   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:24.494370   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:24.494458   28648 retry.go:31] will retry after 198.237254ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:24.693244   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:24.746274   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:24.746370   28648 retry.go:31] will retry after 363.636097ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:25.111990   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:25.165736   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:25.165828   28648 retry.go:31] will retry after 393.509661ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:25.561219   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:25.615091   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:25.615180   28648 retry.go:31] will retry after 756.404989ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:26.372222   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:26.424696   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	W1216 15:36:26.424794   28648 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	W1216 15:36:26.424816   28648 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:26.424877   28648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:36:26.424932   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:26.475266   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:26.475393   28648 retry.go:31] will retry after 352.20444ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:26.828058   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:26.881481   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:26.881572   28648 retry.go:31] will retry after 469.318617ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:27.353234   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:27.405092   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	I1216 15:36:27.405181   28648 retry.go:31] will retry after 567.727158ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:27.975393   28648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000
	W1216 15:36:28.027719   28648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000 returned with exit code 1
	W1216 15:36:28.027819   28648 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	W1216 15:36:28.027839   28648 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	I1216 15:36:28.027852   28648 fix.go:56] fixHost completed within 6m28.283983113s
	I1216 15:36:28.027858   28648 start.go:83] releasing machines lock for "docker-flags-830000", held for 6m28.284026928s
	W1216 15:36:28.027937   28648 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-830000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-830000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1216 15:36:28.071169   28648 out.go:177] 
	W1216 15:36:28.092220   28648 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1216 15:36:28.092266   28648 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1216 15:36:28.092302   28648 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1216 15:36:28.114348   28648 out.go:177] 

                                                
                                                
** /stderr **
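The "will retry after ..." entries from retry.go:31 above show minikube's CLI runner backing off with growing, jittered delays while it waits for the container to reach the expected state; the create ultimately times out after 360 seconds (DRV_CREATE_TIMEOUT). A minimal, self-contained sketch of that retry pattern in Go follows — a hypothetical helper shaped after the log, not minikube's actual retry implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or attempts run out,
	// sleeping a growing, jittered interval between tries (hypothetical
	// helper modeled on the "will retry after ..." lines above).
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the delay each attempt and add jitter so retries spread out.
			delay := base*time.Duration(1<<attempt) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 500*time.Millisecond, func() error {
			calls++
			if calls < 4 {
				return errors.New("No such container: docker-flags-830000")
			}
			return nil
		})
		fmt.Println("result:", err)
	}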
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (201.880937ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (202.827028ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "\n\n"
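docker_test.go:73 similarly expects each --docker-opt from the start command (debug, icc=true) to surface as a flag in the docker unit's ExecStart; again the captured output was empty. A rough sketch of checking an ExecStart property for those flags — the sample ExecStart line below is illustrative, not taken from this run:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative ExecStart property as systemctl might report it; the real
		// value is unknown here because the container was never created.
		execStart := "ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true -H fd:// }"

		// Each --docker-opt=<opt> passed to minikube start should appear as a
		// --<opt> argument of dockerd.
		for _, opt := range []string{"debug", "icc=true"} {
			want := "--" + opt
			fmt.Printf("%s present: %v\n", want, strings.Contains(execStart, want))
		}
	}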
panic.go:523: *** TestDockerFlags FAILED at 2023-12-16 15:36:28.594456 -0800 PST m=+6927.415619254
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-830000
helpers_test.go:235: (dbg) docker inspect docker-flags-830000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "docker-flags-830000",
	        "Id": "b3862c018c089ef44ec0520dd40947f9216bde0907fda910ceecd532c81bff49",
	        "Created": "2023-12-16T23:30:21.719378481Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-830000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
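The post-mortem inspect shows the only artifact the failed run left behind: the docker-flags-830000 bridge network on 192.168.94.0/24 with MTU 65535 and the minikube labels, but no containers attached. That subnet matches the selection earlier in the log, where network.go skipped the reserved 192.168.49/58/67/76/85 ranges and took the next free /24. A toy reconstruction of that walk — the step of 9 and the reserved set are read off this log, not minikube's source:

	package main

	import "fmt"

	// firstFreeSubnet steps through 192.168.<octet>.0/24 candidates and returns
	// the first one that is not reserved by an existing network.
	func firstFreeSubnet(start, step int, reserved map[int]bool) string {
		for octet := start; octet < 256; octet += step {
			if !reserved[octet] {
				return fmt.Sprintf("192.168.%d.0/24", octet)
			}
		}
		return ""
	}

	func main() {
		// Subnets the log reports as reserved by existing docker networks.
		reserved := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
		fmt.Println(firstFreeSubnet(49, 9, reserved)) // 192.168.94.0/24
	}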
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-830000 -n docker-flags-830000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-830000 -n docker-flags-830000: exit status 7 (106.300098ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 15:36:28.753405   29164 status.go:249] status error: host: state: unknown state "docker-flags-830000": docker container inspect docker-flags-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-830000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-830000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-830000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-830000
--- FAIL: TestDockerFlags (758.74s)

                                                
                                    
x
+
TestForceSystemdFlag (751.84s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-603000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-603000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m30.737560837s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-603000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-603000 in cluster force-systemd-flag-603000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-603000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 15:23:19.111452   28526 out.go:296] Setting OutFile to fd 1 ...
	I1216 15:23:19.111752   28526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:23:19.111758   28526 out.go:309] Setting ErrFile to fd 2...
	I1216 15:23:19.111767   28526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:23:19.111956   28526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 15:23:19.113429   28526 out.go:303] Setting JSON to false
	I1216 15:23:19.136172   28526 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":12168,"bootTime":1702756831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 15:23:19.136286   28526 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 15:23:19.158207   28526 out.go:177] * [force-systemd-flag-603000] minikube v1.32.0 on Darwin 14.2
	I1216 15:23:19.202001   28526 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 15:23:19.202054   28526 notify.go:220] Checking for updates...
	I1216 15:23:19.245987   28526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 15:23:19.267977   28526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 15:23:19.288877   28526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 15:23:19.310049   28526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 15:23:19.332128   28526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 15:23:19.354880   28526 config.go:182] Loaded profile config "force-systemd-env-678000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 15:23:19.355058   28526 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 15:23:19.412559   28526 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 15:23:19.412764   28526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:23:19.513255   28526 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-12-16 23:23:19.502923805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 15:23:19.535125   28526 out.go:177] * Using the docker driver based on user configuration
	I1216 15:23:19.556044   28526 start.go:298] selected driver: docker
	I1216 15:23:19.556069   28526 start.go:902] validating driver "docker" against <nil>
	I1216 15:23:19.556083   28526 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 15:23:19.560787   28526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:23:19.658816   28526 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-12-16 23:23:19.649228423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 15:23:19.658988   28526 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1216 15:23:19.659180   28526 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 15:23:19.680751   28526 out.go:177] * Using Docker Desktop driver with root privileges
	I1216 15:23:19.703952   28526 cni.go:84] Creating CNI manager for ""
	I1216 15:23:19.703995   28526 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 15:23:19.704020   28526 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 15:23:19.704036   28526 start_flags.go:323] config:
	{Name:force-systemd-flag-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 15:23:19.747782   28526 out.go:177] * Starting control plane node force-systemd-flag-603000 in cluster force-systemd-flag-603000
	I1216 15:23:19.769910   28526 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 15:23:19.791928   28526 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 15:23:19.834922   28526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:23:19.835009   28526 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 15:23:19.835014   28526 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 15:23:19.835030   28526 cache.go:56] Caching tarball of preloaded images
	I1216 15:23:19.835233   28526 preload.go:174] Found /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 15:23:19.835257   28526 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 15:23:19.835434   28526 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/force-systemd-flag-603000/config.json ...
	I1216 15:23:19.836155   28526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/force-systemd-flag-603000/config.json: {Name:mk0b2815658b09a571345d23e7192feb2a6d7367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 15:23:19.889392   28526 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 15:23:19.889412   28526 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 15:23:19.889434   28526 cache.go:194] Successfully downloaded all kic artifacts
	I1216 15:23:19.889481   28526 start.go:365] acquiring machines lock for force-systemd-flag-603000: {Name:mk3469a5de5b1614e34707eb84ce82a7becd8f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:23:19.889663   28526 start.go:369] acquired machines lock for "force-systemd-flag-603000" in 144.682µs
	I1216 15:23:19.889697   28526 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-603000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 15:23:19.889760   28526 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:23:19.912561   28526 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:23:19.912936   28526 start.go:159] libmachine.API.Create for "force-systemd-flag-603000" (driver="docker")
	I1216 15:23:19.912981   28526 client.go:168] LocalClient.Create starting
	I1216 15:23:19.913157   28526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:23:19.913242   28526 main.go:141] libmachine: Decoding PEM data...
	I1216 15:23:19.913276   28526 main.go:141] libmachine: Parsing certificate...
	I1216 15:23:19.913379   28526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:23:19.913447   28526 main.go:141] libmachine: Decoding PEM data...
	I1216 15:23:19.913464   28526 main.go:141] libmachine: Parsing certificate...
	I1216 15:23:19.914535   28526 cli_runner.go:164] Run: docker network inspect force-systemd-flag-603000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:23:19.966418   28526 cli_runner.go:211] docker network inspect force-systemd-flag-603000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:23:19.966528   28526 network_create.go:281] running [docker network inspect force-systemd-flag-603000] to gather additional debugging logs...
	I1216 15:23:19.966547   28526 cli_runner.go:164] Run: docker network inspect force-systemd-flag-603000
	W1216 15:23:20.016563   28526 cli_runner.go:211] docker network inspect force-systemd-flag-603000 returned with exit code 1
	I1216 15:23:20.016600   28526 network_create.go:284] error running [docker network inspect force-systemd-flag-603000]: docker network inspect force-systemd-flag-603000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-603000 not found
	I1216 15:23:20.016615   28526 network_create.go:286] output of [docker network inspect force-systemd-flag-603000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-603000 not found
	
	** /stderr **
	I1216 15:23:20.016763   28526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:23:20.068832   28526 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:23:20.069213   28526 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022e5040}
	I1216 15:23:20.069229   28526 network_create.go:124] attempt to create docker network force-systemd-flag-603000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1216 15:23:20.069303   28526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-603000 force-systemd-flag-603000
	I1216 15:23:20.155659   28526 network_create.go:108] docker network force-systemd-flag-603000 192.168.58.0/24 created
	I1216 15:23:20.155705   28526 kic.go:121] calculated static IP "192.168.58.2" for the "force-systemd-flag-603000" container
	I1216 15:23:20.155823   28526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:23:20.208633   28526 cli_runner.go:164] Run: docker volume create force-systemd-flag-603000 --label name.minikube.sigs.k8s.io=force-systemd-flag-603000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:23:20.260622   28526 oci.go:103] Successfully created a docker volume force-systemd-flag-603000
	I1216 15:23:20.260744   28526 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-603000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-603000 --entrypoint /usr/bin/test -v force-systemd-flag-603000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:23:20.638305   28526 oci.go:107] Successfully prepared a docker volume force-systemd-flag-603000
	I1216 15:23:20.638348   28526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:23:20.638362   28526 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:23:20.638460   28526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-603000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 15:29:19.919687   28526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:29:19.919830   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:19.972299   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:19.972432   28526 retry.go:31] will retry after 151.275003ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:20.126132   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:20.180341   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:20.180457   28526 retry.go:31] will retry after 416.073069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:20.597427   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:20.649897   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:20.649990   28526 retry.go:31] will retry after 530.004556ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:21.181425   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:21.232786   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	W1216 15:29:21.232893   28526 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	
	W1216 15:29:21.232923   28526 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:21.232978   28526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:29:21.233036   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:21.283161   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:21.283255   28526 retry.go:31] will retry after 173.026612ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:21.456951   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:21.512617   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:21.512731   28526 retry.go:31] will retry after 391.498375ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:21.904773   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:21.959207   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:21.959301   28526 retry.go:31] will retry after 367.091424ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:22.327133   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:29:22.381302   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	W1216 15:29:22.381415   28526 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	
	W1216 15:29:22.381434   28526 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:22.381448   28526 start.go:128] duration metric: createHost completed in 6m2.487410882s
	I1216 15:29:22.381456   28526 start.go:83] releasing machines lock for "force-systemd-flag-603000", held for 6m2.487517658s
	W1216 15:29:22.381469   28526 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1216 15:29:22.381928   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:22.433693   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:22.433745   28526 delete.go:82] Unable to get host status for force-systemd-flag-603000, assuming it has already been deleted: state: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	W1216 15:29:22.433823   28526 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1216 15:29:22.433836   28526 start.go:709] Will try again in 5 seconds ...
	I1216 15:29:27.436086   28526 start.go:365] acquiring machines lock for force-systemd-flag-603000: {Name:mk3469a5de5b1614e34707eb84ce82a7becd8f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:29:27.437023   28526 start.go:369] acquired machines lock for "force-systemd-flag-603000" in 878.825µs
	I1216 15:29:27.437134   28526 start.go:96] Skipping create...Using existing machine configuration
	I1216 15:29:27.437156   28526 fix.go:54] fixHost starting: 
	I1216 15:29:27.437656   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:27.492782   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:27.492832   28526 fix.go:102] recreateIfNeeded on force-systemd-flag-603000: state= err=unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:27.492850   28526 fix.go:107] machineExists: false. err=machine does not exist
	I1216 15:29:27.515115   28526 out.go:177] * docker "force-systemd-flag-603000" container is missing, will recreate.
	I1216 15:29:27.541365   28526 delete.go:124] DEMOLISHING force-systemd-flag-603000 ...
	I1216 15:29:27.541557   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:27.593653   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	W1216 15:29:27.593700   28526 stop.go:75] unable to get state: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:27.593719   28526 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:27.594096   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:27.644051   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:27.644120   28526 delete.go:82] Unable to get host status for force-systemd-flag-603000, assuming it has already been deleted: state: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:27.644211   28526 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-603000
	W1216 15:29:27.694096   28526 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:27.694133   28526 kic.go:371] could not find the container force-systemd-flag-603000 to remove it. will try anyways
	I1216 15:29:27.694209   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:27.744670   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	W1216 15:29:27.744720   28526 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:27.744829   28526 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-603000 /bin/bash -c "sudo init 0"
	W1216 15:29:27.795051   28526 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-603000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 15:29:27.795095   28526 oci.go:650] error shutdown force-systemd-flag-603000: docker exec --privileged -t force-systemd-flag-603000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:28.795800   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:28.849069   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:28.849141   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:28.849153   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:28.849177   28526 retry.go:31] will retry after 253.077249ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:29.104614   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:29.156936   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:29.156988   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:29.157003   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:29.157025   28526 retry.go:31] will retry after 567.108452ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:29.725497   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:29.779294   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:29.779348   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:29.779363   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:29.779386   28526 retry.go:31] will retry after 745.051548ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:30.526771   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:30.579037   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:30.579091   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:30.579100   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:30.579123   28526 retry.go:31] will retry after 1.11633112s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:31.696230   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:31.749442   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:31.749490   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:31.749500   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:31.749522   28526 retry.go:31] will retry after 1.497106077s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:33.248196   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:33.302189   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:33.302246   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:33.302258   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:33.302281   28526 retry.go:31] will retry after 3.529102538s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:36.831692   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:36.884842   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:36.884902   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:36.884916   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:36.884940   28526 retry.go:31] will retry after 5.371065872s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:42.258403   28526 cli_runner.go:164] Run: docker container inspect force-systemd-flag-603000 --format={{.State.Status}}
	W1216 15:29:42.310966   28526 cli_runner.go:211] docker container inspect force-systemd-flag-603000 --format={{.State.Status}} returned with exit code 1
	I1216 15:29:42.311018   28526 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:29:42.311026   28526 oci.go:664] temporary error: container force-systemd-flag-603000 status is  but expect it to be exited
	I1216 15:29:42.311057   28526 oci.go:88] couldn't shut down force-systemd-flag-603000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	 
	I1216 15:29:42.311138   28526 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-603000
	I1216 15:29:42.361729   28526 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-603000
	W1216 15:29:42.411602   28526 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:42.411718   28526 cli_runner.go:164] Run: docker network inspect force-systemd-flag-603000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:29:42.463775   28526 cli_runner.go:164] Run: docker network rm force-systemd-flag-603000
	I1216 15:29:42.569785   28526 fix.go:114] Sleeping 1 second for extra luck!
	I1216 15:29:43.570675   28526 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:29:43.593627   28526 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:29:43.593796   28526 start.go:159] libmachine.API.Create for "force-systemd-flag-603000" (driver="docker")
	I1216 15:29:43.593836   28526 client.go:168] LocalClient.Create starting
	I1216 15:29:43.594051   28526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:29:43.594146   28526 main.go:141] libmachine: Decoding PEM data...
	I1216 15:29:43.594174   28526 main.go:141] libmachine: Parsing certificate...
	I1216 15:29:43.594253   28526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:29:43.594322   28526 main.go:141] libmachine: Decoding PEM data...
	I1216 15:29:43.594340   28526 main.go:141] libmachine: Parsing certificate...
	I1216 15:29:43.595415   28526 cli_runner.go:164] Run: docker network inspect force-systemd-flag-603000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:29:43.648417   28526 cli_runner.go:211] docker network inspect force-systemd-flag-603000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:29:43.648514   28526 network_create.go:281] running [docker network inspect force-systemd-flag-603000] to gather additional debugging logs...
	I1216 15:29:43.648534   28526 cli_runner.go:164] Run: docker network inspect force-systemd-flag-603000
	W1216 15:29:43.699543   28526 cli_runner.go:211] docker network inspect force-systemd-flag-603000 returned with exit code 1
	I1216 15:29:43.699570   28526 network_create.go:284] error running [docker network inspect force-systemd-flag-603000]: docker network inspect force-systemd-flag-603000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-603000 not found
	I1216 15:29:43.699584   28526 network_create.go:286] output of [docker network inspect force-systemd-flag-603000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-603000 not found
	
	** /stderr **
	I1216 15:29:43.699734   28526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:29:43.752159   28526 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:29:43.753762   28526 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:29:43.755368   28526 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:29:43.756827   28526 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:29:43.757205   28526 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021aa180}
	I1216 15:29:43.757218   28526 network_create.go:124] attempt to create docker network force-systemd-flag-603000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1216 15:29:43.757289   28526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-603000 force-systemd-flag-603000
	I1216 15:29:43.842875   28526 network_create.go:108] docker network force-systemd-flag-603000 192.168.85.0/24 created
	I1216 15:29:43.842916   28526 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-603000" container
	I1216 15:29:43.843042   28526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:29:43.896396   28526 cli_runner.go:164] Run: docker volume create force-systemd-flag-603000 --label name.minikube.sigs.k8s.io=force-systemd-flag-603000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:29:43.946230   28526 oci.go:103] Successfully created a docker volume force-systemd-flag-603000
	I1216 15:29:43.946377   28526 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-603000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-603000 --entrypoint /usr/bin/test -v force-systemd-flag-603000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:29:44.275684   28526 oci.go:107] Successfully prepared a docker volume force-systemd-flag-603000
	I1216 15:29:44.275720   28526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:29:44.275732   28526 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:29:44.275865   28526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-603000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 15:35:43.598364   28526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:35:43.598454   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:43.650747   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:43.650868   28526 retry.go:31] will retry after 298.336482ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:43.951556   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:44.006809   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:44.006934   28526 retry.go:31] will retry after 489.602396ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:44.497454   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:44.552934   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:44.553042   28526 retry.go:31] will retry after 514.810615ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:45.069111   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:45.122288   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	W1216 15:35:45.122402   28526 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	
	W1216 15:35:45.122420   28526 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:45.122485   28526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:35:45.122557   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:45.172334   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:45.172430   28526 retry.go:31] will retry after 240.901816ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:45.415329   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:45.522797   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:45.522900   28526 retry.go:31] will retry after 208.553247ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:45.732799   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:45.787504   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:45.787602   28526 retry.go:31] will retry after 840.700321ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:46.628858   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:46.684444   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	W1216 15:35:46.684547   28526 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	
	W1216 15:35:46.684560   28526 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:46.684579   28526 start.go:128] duration metric: createHost completed in 6m3.109606794s
	I1216 15:35:46.684649   28526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:35:46.684719   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:46.735853   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:46.735942   28526 retry.go:31] will retry after 336.594733ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:47.074838   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:47.128982   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:47.129102   28526 retry.go:31] will retry after 334.2568ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:47.464175   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:47.514319   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:47.514422   28526 retry.go:31] will retry after 826.00726ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:48.342625   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:48.397977   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	W1216 15:35:48.398087   28526 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	
	W1216 15:35:48.398127   28526 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:48.398186   28526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:35:48.398273   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:48.449275   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:48.449371   28526 retry.go:31] will retry after 177.796127ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:48.629606   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:48.681036   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:48.681132   28526 retry.go:31] will retry after 500.108709ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:49.182639   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:49.236749   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	I1216 15:35:49.236839   28526 retry.go:31] will retry after 356.784317ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:49.595991   28526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000
	W1216 15:35:49.649945   28526 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000 returned with exit code 1
	W1216 15:35:49.650051   28526 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	
	W1216 15:35:49.650073   28526 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-603000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-603000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	I1216 15:35:49.650085   28526 fix.go:56] fixHost completed within 6m22.208439664s
	I1216 15:35:49.650093   28526 start.go:83] releasing machines lock for "force-systemd-flag-603000", held for 6m22.208497258s
	W1216 15:35:49.650168   28526 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-603000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-603000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1216 15:35:49.693849   28526 out.go:177] 
	W1216 15:35:49.715966   28526 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1216 15:35:49.716022   28526 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1216 15:35:49.716051   28526 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1216 15:35:49.737588   28526 out.go:177] 

                                                
                                                
** /stderr **
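The second attempt fails exactly like the first: the preload extraction started at 15:29:44 never returns before the 360-second createHost deadline, so no container named force-systemd-flag-603000 ever exists, and every docker container inspect for the 22/tcp host port exits with status 1. The "will retry after ..." lines come from a bounded retry with growing delays around that inspect call. The sketch below illustrates only that pattern; the attempt limit and the doubling delay are illustrative assumptions, not minikube's actual retry package.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// inspectSSHPort asks Docker for the host port mapped to 22/tcp on the given
// container, using the same --format template that appears in the log above.
func inspectSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return string(out), nil
}

func main() {
	const container = "force-systemd-flag-603000"
	delay := 150 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		port, err := inspectSSHPort(container)
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // grow the delay between attempts, as the log's retry intervals do
	}
	fmt.Println("giving up: the container was never created")
}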
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-603000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-603000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-603000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (204.376026ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-603000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-12-16 15:35:50.019277 -0800 PST m=+6888.693712682
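docker_test.go:110 verifies the --force-systemd flag by running docker info --format {{.CgroupDriver}} inside the guest over minikube ssh and expecting the answer to be systemd; here the ssh step exits with status 80 before that check can run, because there is no machine to connect to. As a rough illustration of the check itself, run against a local Docker daemon rather than through minikube ssh (that substitution is an assumption made for the sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the Docker daemon which cgroup driver it is using.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("could not query docker info:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		fmt.Printf("expected cgroup driver \"systemd\", got %q\n", driver)
		return
	}
	fmt.Println("cgroup driver is systemd, as --force-systemd requires")
}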
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-603000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-603000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-603000",
	        "Id": "3cf6f04a27f43b1868003c2c7931771dbdc9b422f9e8d5651fe629a8552f5292",
	        "Created": "2023-12-16T23:29:43.803686747Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-603000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
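Note that the bare `docker inspect force-systemd-flag-603000` above appears to have matched the leftover minikube network of that name rather than a container: the object has IPAM/Subnet/Gateway fields, an empty Containers map, and the minikube network labels. A small sketch (names illustrative, taken from this run) that queries the network explicitly and decodes the same fields:

// inspect_network.go - decode `docker network inspect` output for the
// leftover per-profile network shown above and print its subnet and gateway.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type ipamConfig struct {
	Subnet  string `json:"Subnet"`
	Gateway string `json:"Gateway"`
}

type network struct {
	Name string `json:"Name"`
	IPAM struct {
		Config []ipamConfig `json:"Config"`
	} `json:"IPAM"`
	Labels map[string]string `json:"Labels"`
}

func main() {
	out, err := exec.Command("docker", "network", "inspect",
		"force-systemd-flag-603000").Output()
	if err != nil {
		fmt.Println("network inspect failed:", err)
		return
	}
	var nets []network // `docker network inspect` always returns a JSON array
	if err := json.Unmarshal(out, &nets); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			fmt.Printf("%s: subnet=%s gateway=%s labels=%v\n",
				n.Name, c.Subnet, c.Gateway, n.Labels)
		}
	}
}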
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-603000 -n force-systemd-flag-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-603000 -n force-systemd-flag-603000: exit status 7 (108.172491ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 15:35:50.179739   29040 status.go:249] status error: host: state: unknown state "force-systemd-flag-603000": docker container inspect force-systemd-flag-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-603000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-603000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-603000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-603000
--- FAIL: TestForceSystemdFlag (751.84s)
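For reference, the post-mortem status probe used by the helpers above can be reproduced directly; in this run it returned exit status 7 with "Nonexistent", which the helpers treat as "may be ok" and use as the signal to skip log retrieval. A sketch, with the binary path and profile name copied from this run:

// status_probe.go - sketch of `minikube status --format={{.Host}} -p <profile> -n <node>`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "force-systemd-flag-603000"
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	host := strings.TrimSpace(string(out))

	if exitErr, ok := err.(*exec.ExitError); ok {
		// In this run: host="Nonexistent", exit=7 (host never got created).
		fmt.Printf("host=%q exit=%d\n", host, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host:", host)
}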

                                                
                                    
TestForceSystemdEnv (755.92s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-678000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1216 15:11:56.882171   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:12:27.720880   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 15:14:59.937844   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:16:56.887106   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:17:27.725291   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 15:20:30.786182   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 15:21:56.889301   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:22:27.728706   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-678000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.746942451s)

                                                
                                                
-- stdout --
	* [force-systemd-env-678000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-678000 in cluster force-systemd-env-678000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-678000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 15:11:15.010148   27896 out.go:296] Setting OutFile to fd 1 ...
	I1216 15:11:15.010347   27896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:11:15.010352   27896 out.go:309] Setting ErrFile to fd 2...
	I1216 15:11:15.010356   27896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 15:11:15.010544   27896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 15:11:15.011965   27896 out.go:303] Setting JSON to false
	I1216 15:11:15.034842   27896 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11444,"bootTime":1702756831,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 15:11:15.035044   27896 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 15:11:15.057208   27896 out.go:177] * [force-systemd-env-678000] minikube v1.32.0 on Darwin 14.2
	I1216 15:11:15.099874   27896 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 15:11:15.099924   27896 notify.go:220] Checking for updates...
	I1216 15:11:15.142845   27896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 15:11:15.164800   27896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 15:11:15.186920   27896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 15:11:15.208843   27896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 15:11:15.229938   27896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1216 15:11:15.252725   27896 config.go:182] Loaded profile config "offline-docker-716000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 15:11:15.252878   27896 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 15:11:15.309731   27896 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 15:11:15.309879   27896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:11:15.412116   27896 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-12-16 23:11:15.401530758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unco
nfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Ma
nages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins
/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 15:11:15.454556   27896 out.go:177] * Using the docker driver based on user configuration
	I1216 15:11:15.476422   27896 start.go:298] selected driver: docker
	I1216 15:11:15.476464   27896 start.go:902] validating driver "docker" against <nil>
	I1216 15:11:15.476478   27896 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 15:11:15.481051   27896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 15:11:15.580331   27896 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-12-16 23:11:15.570291615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unco
nfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Ma
nages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins
/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
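	The value this test ultimately cares about is the cgroup driver (docker_test.go later runs `docker info --format {{.CgroupDriver}}` against the node). The same Go template works against the host daemon whose info dump appears above (CgroupDriver:cgroupfs); a purely illustrative sketch:

// cgroup_driver.go - query the daemon's cgroup driver the way the test does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	// Typical values are "cgroupfs" or "systemd"; the force-systemd tests
	// compare the node's value against the driver they expect.
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}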
	I1216 15:11:15.580507   27896 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1216 15:11:15.580689   27896 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 15:11:15.602543   27896 out.go:177] * Using Docker Desktop driver with root privileges
	I1216 15:11:15.624617   27896 cni.go:84] Creating CNI manager for ""
	I1216 15:11:15.624658   27896 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 15:11:15.624688   27896 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 15:11:15.624702   27896 start_flags.go:323] config:
	{Name:force-systemd-env-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 15:11:15.646702   27896 out.go:177] * Starting control plane node force-systemd-env-678000 in cluster force-systemd-env-678000
	I1216 15:11:15.689402   27896 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 15:11:15.711540   27896 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 15:11:15.755668   27896 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:11:15.755751   27896 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 15:11:15.755769   27896 cache.go:56] Caching tarball of preloaded images
	I1216 15:11:15.755793   27896 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 15:11:15.755982   27896 preload.go:174] Found /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 15:11:15.755999   27896 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 15:11:15.756088   27896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/force-systemd-env-678000/config.json ...
	I1216 15:11:15.756172   27896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/force-systemd-env-678000/config.json: {Name:mkffb13bc6393ab75047b91590c997c74de7de11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 15:11:15.807592   27896 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 15:11:15.807621   27896 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 15:11:15.807651   27896 cache.go:194] Successfully downloaded all kic artifacts
	I1216 15:11:15.807697   27896 start.go:365] acquiring machines lock for force-systemd-env-678000: {Name:mk1c8d69fe8c90ea85c5c1fc8cabe1e6843402b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:11:15.807850   27896 start.go:369] acquired machines lock for "force-systemd-env-678000" in 141.172µs
	I1216 15:11:15.807875   27896 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-678000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 15:11:15.807976   27896 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:11:15.829993   27896 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:11:15.830365   27896 start.go:159] libmachine.API.Create for "force-systemd-env-678000" (driver="docker")
	I1216 15:11:15.830415   27896 client.go:168] LocalClient.Create starting
	I1216 15:11:15.830654   27896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:11:15.830750   27896 main.go:141] libmachine: Decoding PEM data...
	I1216 15:11:15.830784   27896 main.go:141] libmachine: Parsing certificate...
	I1216 15:11:15.830878   27896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:11:15.830958   27896 main.go:141] libmachine: Decoding PEM data...
	I1216 15:11:15.830973   27896 main.go:141] libmachine: Parsing certificate...
	I1216 15:11:15.831904   27896 cli_runner.go:164] Run: docker network inspect force-systemd-env-678000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:11:15.882654   27896 cli_runner.go:211] docker network inspect force-systemd-env-678000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:11:15.882751   27896 network_create.go:281] running [docker network inspect force-systemd-env-678000] to gather additional debugging logs...
	I1216 15:11:15.882769   27896 cli_runner.go:164] Run: docker network inspect force-systemd-env-678000
	W1216 15:11:15.932577   27896 cli_runner.go:211] docker network inspect force-systemd-env-678000 returned with exit code 1
	I1216 15:11:15.932618   27896 network_create.go:284] error running [docker network inspect force-systemd-env-678000]: docker network inspect force-systemd-env-678000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-678000 not found
	I1216 15:11:15.932631   27896 network_create.go:286] output of [docker network inspect force-systemd-env-678000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-678000 not found
	
	** /stderr **
	I1216 15:11:15.932755   27896 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:11:15.984495   27896 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:11:15.986124   27896 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:11:15.986523   27896 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00231ca80}
	I1216 15:11:15.986539   27896 network_create.go:124] attempt to create docker network force-systemd-env-678000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1216 15:11:15.986610   27896 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-678000 force-systemd-env-678000
	W1216 15:11:16.037017   27896 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-678000 force-systemd-env-678000 returned with exit code 1
	W1216 15:11:16.037054   27896 network_create.go:149] failed to create docker network force-systemd-env-678000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-678000 force-systemd-env-678000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1216 15:11:16.037072   27896 network_create.go:116] failed to create docker network force-systemd-env-678000 192.168.67.0/24, will retry: subnet is taken
	I1216 15:11:16.038472   27896 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:11:16.038879   27896 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00231db30}
	I1216 15:11:16.038891   27896 network_create.go:124] attempt to create docker network force-systemd-env-678000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1216 15:11:16.038954   27896 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-678000 force-systemd-env-678000
	I1216 15:11:16.126429   27896 network_create.go:108] docker network force-systemd-env-678000 192.168.76.0/24 created
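	The network-create sequence just above (reserved subnets skipped, 192.168.67.0/24 rejected with "Pool overlaps with other one on this address space", 192.168.76.0/24 accepted) amounts to walking a list of candidate /24s until the daemon accepts one. An illustrative sketch of that fallback, reusing the exact `docker network create` flags from the log; the candidate list and profile name are from this run:

// network_retry.go - try candidate subnets in order, skipping ones the daemon
// rejects because the address pool overlaps an existing network.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createMinikubeNetwork(name string) (string, error) {
	candidates := []struct{ subnet, gateway string }{
		{"192.168.67.0/24", "192.168.67.1"},
		{"192.168.76.0/24", "192.168.76.1"},
		{"192.168.85.0/24", "192.168.85.1"},
	}
	for _, c := range candidates {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+c.subnet, "--gateway="+c.gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=65535",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name).CombinedOutput()
		if err == nil {
			return c.subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet already taken by another network, try the next one
		}
		return "", fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createMinikubeNetwork("force-systemd-env-678000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created network on", subnet)
}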
	I1216 15:11:16.126483   27896 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-678000" container
	I1216 15:11:16.126599   27896 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:11:16.178917   27896 cli_runner.go:164] Run: docker volume create force-systemd-env-678000 --label name.minikube.sigs.k8s.io=force-systemd-env-678000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:11:16.230783   27896 oci.go:103] Successfully created a docker volume force-systemd-env-678000
	I1216 15:11:16.230900   27896 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-678000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-678000 --entrypoint /usr/bin/test -v force-systemd-env-678000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:11:16.596092   27896 oci.go:107] Successfully prepared a docker volume force-systemd-env-678000
	I1216 15:11:16.596129   27896 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:11:16.596144   27896 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:11:16.596266   27896 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-678000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
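	The extraction step launched above is where this attempt stalls: the next log entry is roughly six minutes later, by which point the 360-second create-host timer has already expired. The command pattern is a throwaway kicbase container that untars the cached preload into the profile's volume; a sketch, with the paths and image digest copied from this run:

// preload_extract.go - untar the cached preload into the profile's docker
// volume via a one-shot container, mirroring the Run line above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	preload := "/Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
	volume := "force-systemd-env-678000"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preload extracted into volume", volume)
}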
	I1216 15:17:15.837722   27896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:17:15.837866   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:15.892878   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:17:15.893014   27896 retry.go:31] will retry after 341.779025ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:16.235551   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:16.290230   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:17:16.290332   27896 retry.go:31] will retry after 371.937536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:16.662934   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:16.715194   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:17:16.715282   27896 retry.go:31] will retry after 328.902934ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:17.046591   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:17.099407   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	W1216 15:17:17.099540   27896 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	
	W1216 15:17:17.099566   27896 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:17.099623   27896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:17:17.099689   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:17.150407   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:17:17.150501   27896 retry.go:31] will retry after 129.983936ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:17.282810   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:17.334272   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:17:17.334372   27896 retry.go:31] will retry after 452.382144ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:17.788438   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:17.842921   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:17:17.843034   27896 retry.go:31] will retry after 833.979539ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:18.679424   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:17:18.733924   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	W1216 15:17:18.734042   27896 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	
	W1216 15:17:18.734058   27896 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:18.734073   27896 start.go:128] duration metric: createHost completed in 6m2.920199152s
	I1216 15:17:18.734081   27896 start.go:83] releasing machines lock for "force-systemd-env-678000", held for 6m2.920336331s
	W1216 15:17:18.734093   27896 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1216 15:17:18.734540   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:18.784768   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:18.784822   27896 delete.go:82] Unable to get host status for force-systemd-env-678000, assuming it has already been deleted: state: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	W1216 15:17:18.784898   27896 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1216 15:17:18.784910   27896 start.go:709] Will try again in 5 seconds ...
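	The repeated "will retry after ..." entries above follow a retry-with-growing-delay pattern: keep re-running an operation until it succeeds or an overall deadline passes. A minimal sketch of that pattern; the helper name and delays are illustrative, not minikube's actual retry.go implementation, and the probed command is the same SSH-port inspect that keeps failing above:

// retry_sketch.go - retry an operation with growing pauses until a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retryUntil(deadline time.Duration, op func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", deadline, err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off, roughly like the growing delays in the log
	}
}

func main() {
	name := "force-systemd-env-678000"
	err := retryUntil(10*time.Second, func() error {
		// The probe that keeps failing above: ask Docker for the container's
		// published SSH port; it errors while the container does not exist.
		return exec.Command("docker", "container", "inspect", "-f",
			"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'",
			name).Run()
	})
	if err != nil {
		fmt.Println(err)
	}
}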
	I1216 15:17:23.786596   27896 start.go:365] acquiring machines lock for force-systemd-env-678000: {Name:mk1c8d69fe8c90ea85c5c1fc8cabe1e6843402b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 15:17:23.786847   27896 start.go:369] acquired machines lock for "force-systemd-env-678000" in 210.223µs
	I1216 15:17:23.786889   27896 start.go:96] Skipping create...Using existing machine configuration
	I1216 15:17:23.786903   27896 fix.go:54] fixHost starting: 
	I1216 15:17:23.787381   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:23.839661   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:23.839702   27896 fix.go:102] recreateIfNeeded on force-systemd-env-678000: state= err=unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:23.839721   27896 fix.go:107] machineExists: false. err=machine does not exist
	I1216 15:17:23.861600   27896 out.go:177] * docker "force-systemd-env-678000" container is missing, will recreate.
	I1216 15:17:23.904338   27896 delete.go:124] DEMOLISHING force-systemd-env-678000 ...
	I1216 15:17:23.904542   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:23.956523   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	W1216 15:17:23.956583   27896 stop.go:75] unable to get state: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:23.956605   27896 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:23.956991   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:24.006818   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:24.006882   27896 delete.go:82] Unable to get host status for force-systemd-env-678000, assuming it has already been deleted: state: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:24.006976   27896 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-678000
	W1216 15:17:24.057307   27896 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-678000 returned with exit code 1
	I1216 15:17:24.057351   27896 kic.go:371] could not find the container force-systemd-env-678000 to remove it. will try anyways
	I1216 15:17:24.057441   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:24.107874   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	W1216 15:17:24.107925   27896 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:24.108009   27896 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-678000 /bin/bash -c "sudo init 0"
	W1216 15:17:24.157452   27896 cli_runner.go:211] docker exec --privileged -t force-systemd-env-678000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 15:17:24.157484   27896 oci.go:650] error shutdown force-systemd-env-678000: docker exec --privileged -t force-systemd-env-678000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:25.159786   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:25.212673   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:25.212736   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:25.212750   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:25.212775   27896 retry.go:31] will retry after 398.874201ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:25.612402   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:25.664932   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:25.664987   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:25.665001   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:25.665026   27896 retry.go:31] will retry after 539.905781ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:26.206430   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:26.257860   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:26.257909   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:26.257919   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:26.257943   27896 retry.go:31] will retry after 1.592177695s: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:27.850873   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:27.904393   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:27.904441   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:27.904453   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:27.904480   27896 retry.go:31] will retry after 2.208610595s: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:30.115502   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:30.170098   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:30.170150   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:30.170162   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:30.170188   27896 retry.go:31] will retry after 2.695616257s: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:32.866054   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:32.916399   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:32.916460   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:32.916472   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:32.916502   27896 retry.go:31] will retry after 3.403994909s: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:36.322802   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:36.376774   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:36.376828   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:36.376840   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:36.376864   27896 retry.go:31] will retry after 5.201165044s: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:41.579705   27896 cli_runner.go:164] Run: docker container inspect force-systemd-env-678000 --format={{.State.Status}}
	W1216 15:17:41.635909   27896 cli_runner.go:211] docker container inspect force-systemd-env-678000 --format={{.State.Status}} returned with exit code 1
	I1216 15:17:41.635959   27896 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:17:41.635973   27896 oci.go:664] temporary error: container force-systemd-env-678000 status is  but expect it to be exited
	I1216 15:17:41.636004   27896 oci.go:88] couldn't shut down force-systemd-env-678000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	 
	I1216 15:17:41.636083   27896 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-678000
	I1216 15:17:41.686573   27896 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-678000
	W1216 15:17:41.736119   27896 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-678000 returned with exit code 1
	I1216 15:17:41.736228   27896 cli_runner.go:164] Run: docker network inspect force-systemd-env-678000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:17:41.787000   27896 cli_runner.go:164] Run: docker network rm force-systemd-env-678000
	I1216 15:17:41.886448   27896 fix.go:114] Sleeping 1 second for extra luck!
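	Between the two create attempts the profile is "demolished": the container is force-removed (a missing container makes that exit non-zero, which the flow treats as non-fatal) and the per-profile network is deleted so it can be recreated. A sketch using the same two commands shown above; names are taken from this run:

// demolish_sketch.go - cleanup between create attempts: force-remove the
// container and delete the per-profile network.
package main

import (
	"fmt"
	"os/exec"
)

func demolish(name string) {
	// Non-zero exit here usually just means "No such container"; cleanup
	// proceeds anyway, matching the log above.
	if out, err := exec.Command("docker", "rm", "-f", "-v", name).CombinedOutput(); err != nil {
		fmt.Printf("container remove (probably ok): %v: %s", err, out)
	}
	if out, err := exec.Command("docker", "network", "rm", name).CombinedOutput(); err != nil {
		fmt.Printf("network remove failed: %v: %s", err, out)
	}
}

func main() {
	demolish("force-systemd-env-678000")
	fmt.Println("cleanup attempted; a fresh network and container can now be created")
}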
	I1216 15:17:42.886626   27896 start.go:125] createHost starting for "" (driver="docker")
	I1216 15:17:42.908343   27896 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1216 15:17:42.908448   27896 start.go:159] libmachine.API.Create for "force-systemd-env-678000" (driver="docker")
	I1216 15:17:42.908479   27896 client.go:168] LocalClient.Create starting
	I1216 15:17:42.908587   27896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 15:17:42.908667   27896 main.go:141] libmachine: Decoding PEM data...
	I1216 15:17:42.908678   27896 main.go:141] libmachine: Parsing certificate...
	I1216 15:17:42.908736   27896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 15:17:42.908785   27896 main.go:141] libmachine: Decoding PEM data...
	I1216 15:17:42.908793   27896 main.go:141] libmachine: Parsing certificate...
	I1216 15:17:42.929600   27896 cli_runner.go:164] Run: docker network inspect force-systemd-env-678000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 15:17:42.980620   27896 cli_runner.go:211] docker network inspect force-systemd-env-678000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 15:17:42.980742   27896 network_create.go:281] running [docker network inspect force-systemd-env-678000] to gather additional debugging logs...
	I1216 15:17:42.980760   27896 cli_runner.go:164] Run: docker network inspect force-systemd-env-678000
	W1216 15:17:43.030995   27896 cli_runner.go:211] docker network inspect force-systemd-env-678000 returned with exit code 1
	I1216 15:17:43.031030   27896 network_create.go:284] error running [docker network inspect force-systemd-env-678000]: docker network inspect force-systemd-env-678000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-678000 not found
	I1216 15:17:43.031045   27896 network_create.go:286] output of [docker network inspect force-systemd-env-678000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-678000 not found
	
	** /stderr **
	I1216 15:17:43.031191   27896 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 15:17:43.082459   27896 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:43.084047   27896 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:43.085584   27896 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:43.087144   27896 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:43.088554   27896 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 15:17:43.088966   27896 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002255f60}
	I1216 15:17:43.088985   27896 network_create.go:124] attempt to create docker network force-systemd-env-678000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1216 15:17:43.089057   27896 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-678000 force-systemd-env-678000
	I1216 15:17:43.176552   27896 network_create.go:108] docker network force-systemd-env-678000 192.168.94.0/24 created
	I1216 15:17:43.176593   27896 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-env-678000" container
	I1216 15:17:43.176705   27896 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 15:17:43.229396   27896 cli_runner.go:164] Run: docker volume create force-systemd-env-678000 --label name.minikube.sigs.k8s.io=force-systemd-env-678000 --label created_by.minikube.sigs.k8s.io=true
	I1216 15:17:43.279535   27896 oci.go:103] Successfully created a docker volume force-systemd-env-678000
	I1216 15:17:43.279662   27896 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-678000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-678000 --entrypoint /usr/bin/test -v force-systemd-env-678000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 15:17:43.581780   27896 oci.go:107] Successfully prepared a docker volume force-systemd-env-678000
	I1216 15:17:43.581820   27896 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 15:17:43.581833   27896 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 15:17:43.581938   27896 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-678000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 15:23:42.915161   27896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:23:42.915299   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:42.967460   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:42.967586   27896 retry.go:31] will retry after 131.754322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:43.101706   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:43.152863   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:43.152989   27896 retry.go:31] will retry after 524.262319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:43.678816   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:43.731363   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:43.731478   27896 retry.go:31] will retry after 708.666749ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:44.440795   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:44.494182   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	W1216 15:23:44.494295   27896 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	
	W1216 15:23:44.494312   27896 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:44.494371   27896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:23:44.494431   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:44.546134   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:44.546234   27896 retry.go:31] will retry after 203.700717ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:44.751828   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:44.805028   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:44.805126   27896 retry.go:31] will retry after 195.934632ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:45.002062   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:45.054495   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:45.054593   27896 retry.go:31] will retry after 712.165627ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:45.767735   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:45.821271   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	W1216 15:23:45.821379   27896 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	
	W1216 15:23:45.821393   27896 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:45.821413   27896 start.go:128] duration metric: createHost completed in 6m2.930500456s
	I1216 15:23:45.821479   27896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 15:23:45.821534   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:45.871003   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:45.871098   27896 retry.go:31] will retry after 280.836877ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:46.154294   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:46.206110   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:46.206209   27896 retry.go:31] will retry after 439.711038ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:46.646678   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:46.701557   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:46.701666   27896 retry.go:31] will retry after 834.573213ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:47.538774   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:47.593236   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	W1216 15:23:47.593349   27896 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	
	W1216 15:23:47.593371   27896 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:47.593429   27896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 15:23:47.593499   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:47.644362   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:47.644461   27896 retry.go:31] will retry after 172.27085ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:47.817558   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:47.869115   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:47.869210   27896 retry.go:31] will retry after 406.168528ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:48.275975   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:48.330186   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:48.330280   27896 retry.go:31] will retry after 285.950261ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:48.618606   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:48.670278   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	I1216 15:23:48.670375   27896 retry.go:31] will retry after 823.518946ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:49.494365   27896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000
	W1216 15:23:49.545529   27896 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000 returned with exit code 1
	W1216 15:23:49.545633   27896 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	
	W1216 15:23:49.545649   27896 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-678000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-678000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	I1216 15:23:49.545669   27896 fix.go:56] fixHost completed within 6m25.754228815s
	I1216 15:23:49.545678   27896 start.go:83] releasing machines lock for "force-systemd-env-678000", held for 6m25.754278367s
	W1216 15:23:49.545755   27896 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-678000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-678000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1216 15:23:49.589179   27896 out.go:177] 
	W1216 15:23:49.610037   27896 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1216 15:23:49.610080   27896 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1216 15:23:49.610124   27896 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1216 15:23:49.631129   27896 out.go:177] 

                                                
                                                
** /stderr **
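
Nearly all of the stderr above is a single pattern repeated: the driver keeps shelling out to `docker container inspect force-systemd-env-678000 --format={{.State.Status}}` (and later to the same command with a port-22 template), gets "No such container", and retries after a growing delay until the 360-second create budget runs out. The short Go sketch below reproduces that poll-and-back-off loop in isolation. It is illustrative only: it assumes a local Docker CLI, and the function names, backoff values, and time budget are invented for the example rather than taken from minikube's oci/retry packages; only the docker command line is copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out to the Docker CLI the same way the log above does
// and returns the container's .State.Status (e.g. "running" or "exited").
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// waitForExited polls until the container reports "exited" or the budget runs
// out, roughly mirroring the retry-with-growing-backoff pattern in the log.
func waitForExited(name string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	backoff := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			return nil
		}
		fmt.Printf("will retry after %v: status=%q err=%v\n", backoff, status, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("gave up waiting for %q to reach state %q", name, "exited")
}

func main() {
	if err := waitForExited("force-systemd-env-678000", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}

Run against a container name that does not exist, this prints the same kind of "will retry after ..." lines seen above and then gives up, which is the behaviour the test ultimately reports as DRV_CREATE_TIMEOUT.
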
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-678000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-678000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-678000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (208.921937ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-678000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-12-16 15:23:49.935894 -0800 PST m=+6168.618805960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-678000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-678000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-678000",
	        "Id": "81ec868f9e801d07932a159e4165f2783485485a59d1194e29bf2d3d7f20a717",
	        "Created": "2023-12-16T23:17:43.137085521Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-678000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
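
Note in the inspect output above that the `force-systemd-env-678000` bridge network (created at 23:17:43Z, the same second as the `docker network create` call earlier in the log) is still present with an empty `Containers` map even though the container itself was never created; that orphaned network is what `minikube delete -p force-systemd-env-678000` is expected to clean up. Below is a small, hedged Go sketch for listing such leftovers by the label minikube sets; only the label key is taken from the log, the rest is illustrative and assumes a local Docker CLI.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// leftoverMinikubeNetworks lists docker networks carrying the label that the
// `docker network create` call in the log sets, which is how a failed create
// like this one leaves an orphaned bridge network behind.
func leftoverMinikubeNetworks() ([]string, error) {
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	names, err := leftoverMinikubeNetworks()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	for _, n := range names {
		fmt.Println("minikube-labelled network still present:", n)
	}
}
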
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-678000 -n force-systemd-env-678000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-678000 -n force-systemd-env-678000: exit status 7 (107.138191ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 15:23:50.096964   28624 status.go:249] status error: host: state: unknown state "force-systemd-env-678000": docker container inspect force-systemd-env-678000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-678000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-678000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-678000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-678000
--- FAIL: TestForceSystemdEnv (755.92s)
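
One other step worth calling out from the failed recreate: before the container attempt, the log shows subnet selection walking 192.168.49.0/24, 58, 67, 76 and 85 (all reserved) and settling on 192.168.94.0/24. The sketch below mimics that scan with the reserved set hard-coded from this run; the candidate order, step size, and helper names are inferred from the log for illustration and are not minikube's actual network package.

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 subnets in the same order the
// log shows (third octet 49, 58, 67, 76, ...) and returns the first one that is
// not already reserved. The reserved set here is hard-coded from the log; real
// code would derive it from existing docker networks and host interfaces.
func firstFreeSubnet(reserved map[string]bool) (string, bool) {
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[subnet] {
			fmt.Println("skipping subnet", subnet, "that is reserved")
			continue
		}
		return subnet, true
	}
	return "", false
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if subnet, ok := firstFreeSubnet(reserved); ok {
		// With the reservations above this prints 192.168.94.0/24,
		// matching the subnet the failed run actually picked.
		fmt.Println("using free private subnet", subnet)
	}
}
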

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (263.48s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-709000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1216 13:56:56.723428   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:57:24.410715   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:57:27.561774   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:27.567494   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:27.577955   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:27.599560   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:27.640600   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:27.721231   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:27.882034   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:28.202924   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:28.845213   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:30.125833   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:32.686675   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:37.808381   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:57:48.049736   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:58:08.531161   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 13:58:49.492630   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-709000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m23.44395894s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-709000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-709000 in cluster ingress-addon-legacy-709000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 13:54:58.703338   23286 out.go:296] Setting OutFile to fd 1 ...
	I1216 13:54:58.703663   23286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:54:58.703670   23286 out.go:309] Setting ErrFile to fd 2...
	I1216 13:54:58.703674   23286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:54:58.703864   23286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 13:54:58.705295   23286 out.go:303] Setting JSON to false
	I1216 13:54:58.729987   23286 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6867,"bootTime":1702756831,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 13:54:58.730076   23286 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 13:54:58.751966   23286 out.go:177] * [ingress-addon-legacy-709000] minikube v1.32.0 on Darwin 14.2
	I1216 13:54:58.794784   23286 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 13:54:58.816874   23286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 13:54:58.794847   23286 notify.go:220] Checking for updates...
	I1216 13:54:58.838851   23286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 13:54:58.859724   23286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 13:54:58.880785   23286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 13:54:58.902017   23286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 13:54:58.924311   23286 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 13:54:58.980998   23286 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 13:54:58.981154   23286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:54:59.085197   23286 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:54:59.075146896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:54:59.127536   23286 out.go:177] * Using the docker driver based on user configuration
	I1216 13:54:59.148758   23286 start.go:298] selected driver: docker
	I1216 13:54:59.148782   23286 start.go:902] validating driver "docker" against <nil>
	I1216 13:54:59.148799   23286 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 13:54:59.153765   23286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:54:59.253232   23286 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:54:59.243642023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:54:59.253415   23286 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1216 13:54:59.253593   23286 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 13:54:59.274839   23286 out.go:177] * Using Docker Desktop driver with root privileges
	I1216 13:54:59.295784   23286 cni.go:84] Creating CNI manager for ""
	I1216 13:54:59.295807   23286 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 13:54:59.295816   23286 start_flags.go:323] config:
	{Name:ingress-addon-legacy-709000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-709000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:54:59.316689   23286 out.go:177] * Starting control plane node ingress-addon-legacy-709000 in cluster ingress-addon-legacy-709000
	I1216 13:54:59.359762   23286 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 13:54:59.381383   23286 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 13:54:59.402667   23286 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1216 13:54:59.402763   23286 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 13:54:59.454530   23286 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 13:54:59.454568   23286 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 13:54:59.458653   23286 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1216 13:54:59.458678   23286 cache.go:56] Caching tarball of preloaded images
	I1216 13:54:59.459263   23286 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1216 13:54:59.480570   23286 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1216 13:54:59.522403   23286 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:54:59.597206   23286 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1216 13:55:05.552800   23286 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:55:05.552992   23286 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:55:06.190656   23286 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1216 13:55:06.190917   23286 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/config.json ...
	I1216 13:55:06.190944   23286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/config.json: {Name:mk0ef682e03816d2f5ef27f07d0b4685a0bfa0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:06.191538   23286 cache.go:194] Successfully downloaded all kic artifacts
	I1216 13:55:06.191570   23286 start.go:365] acquiring machines lock for ingress-addon-legacy-709000: {Name:mk68f6b9c8d6eca87d51d1868aca2a10c2c769c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 13:55:06.192136   23286 start.go:369] acquired machines lock for "ingress-addon-legacy-709000" in 555.3µs
	I1216 13:55:06.192161   23286 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-709000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-709000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 13:55:06.192215   23286 start.go:125] createHost starting for "" (driver="docker")
	I1216 13:55:06.213879   23286 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 13:55:06.214164   23286 start.go:159] libmachine.API.Create for "ingress-addon-legacy-709000" (driver="docker")
	I1216 13:55:06.214244   23286 client.go:168] LocalClient.Create starting
	I1216 13:55:06.214416   23286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 13:55:06.214484   23286 main.go:141] libmachine: Decoding PEM data...
	I1216 13:55:06.214505   23286 main.go:141] libmachine: Parsing certificate...
	I1216 13:55:06.214575   23286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 13:55:06.214629   23286 main.go:141] libmachine: Decoding PEM data...
	I1216 13:55:06.214649   23286 main.go:141] libmachine: Parsing certificate...
	I1216 13:55:06.215393   23286 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-709000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 13:55:06.271142   23286 cli_runner.go:211] docker network inspect ingress-addon-legacy-709000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 13:55:06.271264   23286 network_create.go:281] running [docker network inspect ingress-addon-legacy-709000] to gather additional debugging logs...
	I1216 13:55:06.271284   23286 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-709000
	W1216 13:55:06.321283   23286 cli_runner.go:211] docker network inspect ingress-addon-legacy-709000 returned with exit code 1
	I1216 13:55:06.321317   23286 network_create.go:284] error running [docker network inspect ingress-addon-legacy-709000]: docker network inspect ingress-addon-legacy-709000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-709000 not found
	I1216 13:55:06.321334   23286 network_create.go:286] output of [docker network inspect ingress-addon-legacy-709000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-709000 not found
	
	** /stderr **
	I1216 13:55:06.321480   23286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 13:55:06.372565   23286 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00081a630}
	I1216 13:55:06.372599   23286 network_create.go:124] attempt to create docker network ingress-addon-legacy-709000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1216 13:55:06.372687   23286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-709000 ingress-addon-legacy-709000
	I1216 13:55:06.459033   23286 network_create.go:108] docker network ingress-addon-legacy-709000 192.168.49.0/24 created
	I1216 13:55:06.459082   23286 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-709000" container
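The network step above reduces to one docker CLI call: minikube picks the first free private /24 (here 192.168.49.0/24 with gateway .1) and creates a labelled bridge network for the cluster, and the "static" node IP 192.168.49.2 is simply the subnet's first client address (ClientMin in the record above). The Go sketch below is illustrative only, not minikube's implementation; it replays the exact command from the log via os/exec.

    // network_create_sketch.go: replays the "docker network create" call shown above.
    // Flags, labels and the network name are copied from the log; error handling is minimal.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker",
            "network", "create", "--driver=bridge",
            "--subnet=192.168.49.0/24", "--gateway=192.168.49.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=65535",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=ingress-addon-legacy-709000",
            "ingress-addon-legacy-709000",
        ).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("network create failed:", err)
        }
    }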
	I1216 13:55:06.459205   23286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 13:55:06.511322   23286 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-709000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-709000 --label created_by.minikube.sigs.k8s.io=true
	I1216 13:55:06.563967   23286 oci.go:103] Successfully created a docker volume ingress-addon-legacy-709000
	I1216 13:55:06.564132   23286 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-709000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-709000 --entrypoint /usr/bin/test -v ingress-addon-legacy-709000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 13:55:06.963471   23286 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-709000
	I1216 13:55:06.963515   23286 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1216 13:55:06.963531   23286 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 13:55:06.963649   23286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-709000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 13:55:09.525626   23286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-709000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir: (2.561879219s)
	I1216 13:55:09.525655   23286 kic.go:203] duration metric: took 2.562091 seconds to extract preloaded images to volume
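The two-and-a-half-second step above is how minikube avoids pulling images at cluster start: the preloaded lz4 tarball of v1.18.20 images is mounted read-only into a throwaway kicbase container and untarred into the node's named volume, so /var/lib/docker is already populated when the node boots. A minimal stand-in for that command, with the same mounts and arguments the log shows:

    // preload_extract_sketch.go: illustrative only - mirrors the "docker run --entrypoint
    // /usr/bin/tar" command above, which unpacks the preloaded-images tarball into the
    // node's named volume. Paths and the image digest are copied from the log.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        tarball := "/Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5"
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "ingress-addon-legacy-709000:/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }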
	I1216 13:55:09.525782   23286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 13:55:09.627477   23286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-709000 --name ingress-addon-legacy-709000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-709000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-709000 --network ingress-addon-legacy-709000 --ip 192.168.49.2 --volume ingress-addon-legacy-709000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5
	I1216 13:55:09.894792   23286 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Running}}
	I1216 13:55:09.952274   23286 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Status}}
	I1216 13:55:10.012389   23286 cli_runner.go:164] Run: docker exec ingress-addon-legacy-709000 stat /var/lib/dpkg/alternatives/iptables
	I1216 13:55:10.142602   23286 oci.go:144] the created container "ingress-addon-legacy-709000" has a running status.
	I1216 13:55:10.142650   23286 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa...
	I1216 13:55:10.262286   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1216 13:55:10.262357   23286 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 13:55:10.326345   23286 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Status}}
	I1216 13:55:10.387960   23286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 13:55:10.387992   23286 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-709000 chown docker:docker /home/docker/.ssh/authorized_keys]
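Key provisioning above is plain OpenSSH material: an RSA key pair is written under .minikube/machines/ingress-addon-legacy-709000/, the public half (381 bytes in this run) is copied to /home/docker/.ssh/authorized_keys inside the container, and ownership is fixed with a privileged docker exec. A minimal sketch of generating such a key and rendering the authorized_keys line; the use of golang.org/x/crypto/ssh is an assumption of this sketch, not something the log shows:

    // ssh_key_sketch.go: simplified sketch of the key step above - generate an RSA key
    // and print the matching authorized_keys line. Copying id_rsa.pub into the container
    // and the chown step are omitted.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        // One "ssh-rsa AAAA..." line, suitable for /home/docker/.ssh/authorized_keys.
        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
    }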
	I1216 13:55:10.494215   23286 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Status}}
	I1216 13:55:10.551610   23286 machine.go:88] provisioning docker machine ...
	I1216 13:55:10.551655   23286 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-709000"
	I1216 13:55:10.551770   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:10.606576   23286 main.go:141] libmachine: Using SSH client type: native
	I1216 13:55:10.606938   23286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 56471 <nil> <nil>}
	I1216 13:55:10.606954   23286 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-709000 && echo "ingress-addon-legacy-709000" | sudo tee /etc/hostname
	I1216 13:55:10.754113   23286 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-709000
	
	I1216 13:55:10.754224   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:10.806440   23286 main.go:141] libmachine: Using SSH client type: native
	I1216 13:55:10.806724   23286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 56471 <nil> <nil>}
	I1216 13:55:10.806743   23286 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-709000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-709000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-709000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 13:55:10.942568   23286 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 13:55:10.942592   23286 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17806-19996/.minikube CaCertPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17806-19996/.minikube}
	I1216 13:55:10.942611   23286 ubuntu.go:177] setting up certificates
	I1216 13:55:10.942619   23286 provision.go:83] configureAuth start
	I1216 13:55:10.942694   23286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-709000
	I1216 13:55:10.994989   23286 provision.go:138] copyHostCerts
	I1216 13:55:10.995037   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.pem
	I1216 13:55:10.995094   23286 exec_runner.go:144] found /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.pem, removing ...
	I1216 13:55:10.995102   23286 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.pem
	I1216 13:55:10.995257   23286 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.pem (1078 bytes)
	I1216 13:55:10.995447   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cert.pem
	I1216 13:55:10.995475   23286 exec_runner.go:144] found /Users/jenkins/minikube-integration/17806-19996/.minikube/cert.pem, removing ...
	I1216 13:55:10.995480   23286 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17806-19996/.minikube/cert.pem
	I1216 13:55:10.995604   23286 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17806-19996/.minikube/cert.pem (1123 bytes)
	I1216 13:55:10.995776   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17806-19996/.minikube/key.pem
	I1216 13:55:10.995817   23286 exec_runner.go:144] found /Users/jenkins/minikube-integration/17806-19996/.minikube/key.pem, removing ...
	I1216 13:55:10.995824   23286 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17806-19996/.minikube/key.pem
	I1216 13:55:10.995909   23286 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17806-19996/.minikube/key.pem (1679 bytes)
	I1216 13:55:10.996091   23286 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-709000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-709000]
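configureAuth issues a server certificate for the new machine whose subject alternative names cover the node IP, loopback and the hostnames listed above; in the real flow it is signed by the shared minikube CA (ca.pem/ca-key.pem). The sketch below is a simplified, self-signed stand-in that only shows how those SANs land in the x509 template; the org name and addresses are taken from the log line above.

    // server_cert_sketch.go: self-signed stand-in for the server-cert step above.
    // The real flow signs with the minikube CA; SAN values are copied from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-709000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-709000"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }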
	I1216 13:55:11.151379   23286 provision.go:172] copyRemoteCerts
	I1216 13:55:11.151462   23286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 13:55:11.151568   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:11.203780   23286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 13:55:11.299793   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 13:55:11.299880   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 13:55:11.320677   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 13:55:11.320750   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1216 13:55:11.341531   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 13:55:11.341606   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 13:55:11.362267   23286 provision.go:86] duration metric: configureAuth took 419.627536ms
	I1216 13:55:11.362281   23286 ubuntu.go:193] setting minikube options for container-runtime
	I1216 13:55:11.362428   23286 config.go:182] Loaded profile config "ingress-addon-legacy-709000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1216 13:55:11.362503   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:11.413722   23286 main.go:141] libmachine: Using SSH client type: native
	I1216 13:55:11.414082   23286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 56471 <nil> <nil>}
	I1216 13:55:11.414102   23286 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 13:55:11.554026   23286 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 13:55:11.554043   23286 ubuntu.go:71] root file system type: overlay
	I1216 13:55:11.554133   23286 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 13:55:11.554229   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:11.605718   23286 main.go:141] libmachine: Using SSH client type: native
	I1216 13:55:11.606029   23286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 56471 <nil> <nil>}
	I1216 13:55:11.606080   23286 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 13:55:11.752846   23286 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 13:55:11.752959   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:11.804606   23286 main.go:141] libmachine: Using SSH client type: native
	I1216 13:55:11.804902   23286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 56471 <nil> <nil>}
	I1216 13:55:11.804915   23286 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 13:55:12.400369   23286 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-16 21:55:11.750285720 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 13:55:12.400393   23286 machine.go:91] provisioned docker machine in 1.84873553s
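The SSH command issued at 13:55:11.804 is the idempotent unit update: render docker.service.new, diff it against the installed unit, and only when they differ move it into place and daemon-reload, enable and restart docker. The diff above also shows why the override begins with an empty ExecStart= line: as the comment embedded in the unit explains, it clears the ExecStart inherited from the stock unit so systemd does not reject the service for having two ExecStart= settings. A local stand-in for that conditional swap (in the real flow it runs over SSH inside the node):

    // unit_swap_sketch.go: replace the docker unit only when the newly rendered file
    // differs from the installed one, then reload and restart. The shell script is the
    // one shown in the log; running it locally requires sudo.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
        out, err := exec.Command("sh", "-c", script).CombinedOutput()
        log.Printf("%s", out)
        if err != nil {
            log.Printf("unit update failed: %v", err)
        }
    }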
	I1216 13:55:12.400412   23286 client.go:171] LocalClient.Create took 6.186078884s
	I1216 13:55:12.400433   23286 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-709000" took 6.186191161s
	I1216 13:55:12.400446   23286 start.go:300] post-start starting for "ingress-addon-legacy-709000" (driver="docker")
	I1216 13:55:12.400455   23286 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 13:55:12.401115   23286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 13:55:12.401192   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:12.455628   23286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 13:55:12.552913   23286 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 13:55:12.556912   23286 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 13:55:12.556935   23286 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 13:55:12.556944   23286 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 13:55:12.556951   23286 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1216 13:55:12.556963   23286 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17806-19996/.minikube/addons for local assets ...
	I1216 13:55:12.557061   23286 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17806-19996/.minikube/files for local assets ...
	I1216 13:55:12.557772   23286 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/ssl/certs/204382.pem -> 204382.pem in /etc/ssl/certs
	I1216 13:55:12.557780   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/ssl/certs/204382.pem -> /etc/ssl/certs/204382.pem
	I1216 13:55:12.557999   23286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 13:55:12.566407   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/ssl/certs/204382.pem --> /etc/ssl/certs/204382.pem (1708 bytes)
	I1216 13:55:12.587620   23286 start.go:303] post-start completed in 187.163539ms
	I1216 13:55:12.588190   23286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-709000
	I1216 13:55:12.639758   23286 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/config.json ...
	I1216 13:55:12.640666   23286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 13:55:12.640741   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:12.692116   23286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 13:55:12.785341   23286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 13:55:12.790342   23286 start.go:128] duration metric: createHost completed in 6.598024222s
	I1216 13:55:12.790367   23286 start.go:83] releasing machines lock for "ingress-addon-legacy-709000", held for 6.598130814s
	I1216 13:55:12.790472   23286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-709000
	I1216 13:55:12.842432   23286 ssh_runner.go:195] Run: cat /version.json
	I1216 13:55:12.842505   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:12.843013   23286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 13:55:12.843328   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:12.898681   23286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 13:55:12.898677   23286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 13:55:13.095537   23286 ssh_runner.go:195] Run: systemctl --version
	I1216 13:55:13.100402   23286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 13:55:13.105403   23286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1216 13:55:13.128392   23286 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1216 13:55:13.128455   23286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1216 13:55:13.143473   23286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1216 13:55:13.158255   23286 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 13:55:13.158270   23286 start.go:475] detecting cgroup driver to use...
	I1216 13:55:13.158281   23286 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1216 13:55:13.158393   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 13:55:13.174134   23286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1216 13:55:13.183626   23286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 13:55:13.193373   23286 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 13:55:13.193429   23286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 13:55:13.202847   23286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 13:55:13.212370   23286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 13:55:13.222636   23286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 13:55:13.232220   23286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 13:55:13.241137   23286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 13:55:13.250583   23286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 13:55:13.258755   23286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 13:55:13.266989   23286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 13:55:13.322158   23286 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 13:55:13.403695   23286 start.go:475] detecting cgroup driver to use...
	I1216 13:55:13.403722   23286 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1216 13:55:13.403803   23286 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 13:55:13.421685   23286 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1216 13:55:13.421745   23286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 13:55:13.433616   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 13:55:13.451422   23286 ssh_runner.go:195] Run: which cri-dockerd
	I1216 13:55:13.455755   23286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 13:55:13.465172   23286 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1216 13:55:13.484998   23286 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 13:55:13.581591   23286 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 13:55:13.665397   23286 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 13:55:13.665501   23286 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 13:55:13.683437   23286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 13:55:13.766155   23286 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 13:55:14.012793   23286 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 13:55:14.037150   23286 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 13:55:14.084693   23286 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1216 13:55:14.084791   23286 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-709000 dig +short host.docker.internal
	I1216 13:55:14.201795   23286 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 13:55:14.202549   23286 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 13:55:14.207184   23286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
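The bash one-liner above keeps /etc/hosts idempotent: drop any existing host.minikube.internal entry, append a fresh one pointing at the host IP obtained by digging host.docker.internal (192.168.65.254 here), and copy the result back with sudo. A stdlib-only sketch of the same filter-and-append idea; it prints the rewritten file instead of replacing /etc/hosts, which needs root:

    // hosts_entry_sketch.go: stdlib version of the filter-and-append trick above.
    // It drops any stale "host.minikube.internal" line and appends a fresh entry.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // drop the stale entry
            }
            fmt.Println(line)
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("192.168.65.254\thost.minikube.internal")
    }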
	I1216 13:55:14.218250   23286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:55:14.270185   23286 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1216 13:55:14.270267   23286 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 13:55:14.289029   23286 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1216 13:55:14.289057   23286 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1216 13:55:14.289129   23286 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 13:55:14.297792   23286 ssh_runner.go:195] Run: which lz4
	I1216 13:55:14.301732   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1216 13:55:14.301980   23286 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 13:55:14.305957   23286 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 13:55:14.305987   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1216 13:55:20.282907   23286 docker.go:635] Took 5.981034 seconds to copy over tarball
	I1216 13:55:20.282961   23286 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 13:55:21.962954   23286 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.679946984s)
	I1216 13:55:21.962973   23286 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 13:55:22.008827   23286 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 13:55:22.017806   23286 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1216 13:55:22.034207   23286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 13:55:22.092076   23286 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 13:55:23.465460   23286 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.37334517s)
	I1216 13:55:23.465582   23286 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 13:55:23.485921   23286 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1216 13:55:23.485933   23286 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1216 13:55:23.485945   23286 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 13:55:23.490931   23286 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1216 13:55:23.491697   23286 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1216 13:55:23.491778   23286 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 13:55:23.491932   23286 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1216 13:55:23.492104   23286 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1216 13:55:23.492616   23286 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1216 13:55:23.492760   23286 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1216 13:55:23.492877   23286 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1216 13:55:23.498329   23286 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1216 13:55:23.498463   23286 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1216 13:55:23.498726   23286 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1216 13:55:23.498732   23286 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1216 13:55:23.498726   23286 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 13:55:23.498868   23286 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1216 13:55:23.499997   23286 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 13:55:23.500263   23286 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1216 13:55:25.313027   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1216 13:55:25.332730   23286 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1216 13:55:25.332775   23286 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1216 13:55:25.332833   23286 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1216 13:55:25.350837   23286 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1216 13:55:25.362470   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1216 13:55:25.382351   23286 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1216 13:55:25.382387   23286 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1216 13:55:25.382444   23286 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1216 13:55:25.399247   23286 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1216 13:55:25.404822   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1216 13:55:25.405333   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1216 13:55:25.420254   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1216 13:55:25.424855   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 13:55:25.425465   23286 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1216 13:55:25.425504   23286 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1216 13:55:25.425570   23286 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1216 13:55:25.427640   23286 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1216 13:55:25.427668   23286 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1216 13:55:25.427732   23286 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1216 13:55:25.429258   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1216 13:55:25.449196   23286 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1216 13:55:25.449229   23286 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1216 13:55:25.449327   23286 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1216 13:55:25.454856   23286 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 13:55:25.454888   23286 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1216 13:55:25.454913   23286 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1216 13:55:25.454956   23286 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1216 13:55:25.454958   23286 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1216 13:55:25.461118   23286 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1216 13:55:25.461159   23286 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1216 13:55:25.461265   23286 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1216 13:55:25.538008   23286 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1216 13:55:25.538552   23286 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 13:55:25.544538   23286 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1216 13:55:25.799394   23286 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 13:55:25.820449   23286 cache_images.go:92] LoadImages completed in 2.334460854s
	W1216 13:55:25.820498   23286 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
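Each "needs transfer" decision above comes from comparing the image ID that "docker image inspect --format {{.Id}}" reports inside the node against the expected hash; here the mismatch is expected because the preload carries k8s.gcr.io-tagged images while registry.k8s.io tags are looked up, so minikube removes them and falls back to the on-disk image cache, which is absent on this runner, hence the warning. A small sketch of that check, with the image name and hash copied from the log:

    // image_check_sketch.go: illustrates the check behind the "needs transfer" lines -
    // read the image ID with "docker image inspect" and compare it to the expected hash.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        image := "registry.k8s.io/kube-proxy:v1.18.20"
        want := "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba"
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
        if err != nil || got != want {
            fmt.Printf("%s needs transfer (have %q, want %q)\n", image, got, want)
            return
        }
        fmt.Println(image, "already present")
    }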
	I1216 13:55:25.820571   23286 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 13:55:25.868949   23286 cni.go:84] Creating CNI manager for ""
	I1216 13:55:25.868966   23286 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 13:55:25.868980   23286 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1216 13:55:25.868996   23286 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-709000 NodeName:ingress-addon-legacy-709000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 13:55:25.869100   23286 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-709000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 13:55:25.869167   23286 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-709000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-709000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1216 13:55:25.869234   23286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1216 13:55:25.878508   23286 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 13:55:25.878560   23286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 13:55:25.886841   23286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1216 13:55:25.901865   23286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1216 13:55:25.917794   23286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
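The kubeadm config printed above is rendered from the option set logged at kubeadm.go:176 and then shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2124 bytes in this run). Below is a toy text/template rendering of just a fragment of it, with values taken from those options; the real template covers the full Init, Cluster, Kubelet and KubeProxy configurations:

    // kubeadm_template_sketch.go: toy rendering of a fragment of the kubeadm config above.
    // Only ClusterName-related fields, the node name and the advertise address are shown.
    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const frag = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
      criSocket: /var/run/dockershim.sock
    `

    func main() {
        data := struct {
            AdvertiseAddress string
            APIServerPort    int
            NodeName         string
        }{"192.168.49.2", 8443, "ingress-addon-legacy-709000"}
        t := template.Must(template.New("kubeadm").Parse(frag))
        if err := t.Execute(os.Stdout, data); err != nil {
            log.Fatal(err)
        }
    }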
	I1216 13:55:25.934473   23286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 13:55:25.938536   23286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 13:55:25.949207   23286 certs.go:56] Setting up /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000 for IP: 192.168.49.2
	I1216 13:55:25.949227   23286 certs.go:190] acquiring lock for shared ca certs: {Name:mk824d379c5f9ac3c94c9d3f970088baca10fd2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:25.949403   23286 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.key
	I1216 13:55:25.949500   23286 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17806-19996/.minikube/proxy-client-ca.key
	I1216 13:55:25.949549   23286 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/client.key
	I1216 13:55:25.949564   23286 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/client.crt with IP's: []
	I1216 13:55:26.148126   23286 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/client.crt ...
	I1216 13:55:26.148142   23286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/client.crt: {Name:mk6cde39b0e653f28fa80ad34d5d691abe7d72a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:26.149067   23286 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/client.key ...
	I1216 13:55:26.149085   23286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/client.key: {Name:mk5ea300175ecd4da5f8f94a5fd4b949839b52e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:26.150277   23286 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.key.dd3b5fb2
	I1216 13:55:26.150301   23286 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1216 13:55:26.393405   23286 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.crt.dd3b5fb2 ...
	I1216 13:55:26.393420   23286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.crt.dd3b5fb2: {Name:mkb23eb106b18c8d0d3cce247d0a70df0ac2e9bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:26.393916   23286 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.key.dd3b5fb2 ...
	I1216 13:55:26.393935   23286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.key.dd3b5fb2: {Name:mk17e179068a9eebf785638797cefe4ac2ad9bc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:26.394344   23286 certs.go:337] copying /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.crt
	I1216 13:55:26.394528   23286 certs.go:341] copying /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.key
	I1216 13:55:26.394697   23286 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.key
	I1216 13:55:26.394715   23286 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.crt with IP's: []
	I1216 13:55:26.435520   23286 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.crt ...
	I1216 13:55:26.435530   23286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.crt: {Name:mkf48c6a00cd72993d6940fe622522053690358b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:26.436381   23286 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.key ...
	I1216 13:55:26.436390   23286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.key: {Name:mk88d14423094785491e7357a06f86e0c034a39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:55:26.436987   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 13:55:26.437015   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 13:55:26.437033   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 13:55:26.437050   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 13:55:26.437066   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 13:55:26.437083   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 13:55:26.437101   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 13:55:26.437120   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 13:55:26.437214   23286 certs.go:437] found cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/20438.pem (1338 bytes)
	W1216 13:55:26.437278   23286 certs.go:433] ignoring /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/20438_empty.pem, impossibly tiny 0 bytes
	I1216 13:55:26.437288   23286 certs.go:437] found cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 13:55:26.437325   23286 certs.go:437] found cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem (1078 bytes)
	I1216 13:55:26.437360   23286 certs.go:437] found cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem (1123 bytes)
	I1216 13:55:26.437388   23286 certs.go:437] found cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/Users/jenkins/minikube-integration/17806-19996/.minikube/certs/key.pem (1679 bytes)
	I1216 13:55:26.437451   23286 certs.go:437] found cert: /Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/ssl/certs/204382.pem (1708 bytes)
	I1216 13:55:26.437487   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 13:55:26.437505   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/20438.pem -> /usr/share/ca-certificates/20438.pem
	I1216 13:55:26.437521   23286 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/ssl/certs/204382.pem -> /usr/share/ca-certificates/204382.pem
	I1216 13:55:26.437987   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1216 13:55:26.458783   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 13:55:26.480039   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 13:55:26.500757   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/ingress-addon-legacy-709000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 13:55:26.522272   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 13:55:26.543427   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 13:55:26.563692   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 13:55:26.585114   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 13:55:26.605912   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 13:55:26.627307   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/20438.pem --> /usr/share/ca-certificates/20438.pem (1338 bytes)
	I1216 13:55:26.647591   23286 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/ssl/certs/204382.pem --> /usr/share/ca-certificates/204382.pem (1708 bytes)
	I1216 13:55:26.668517   23286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 13:55:26.685111   23286 ssh_runner.go:195] Run: openssl version
	I1216 13:55:26.690601   23286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 13:55:26.699773   23286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 13:55:26.703973   23286 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 16 21:43 /usr/share/ca-certificates/minikubeCA.pem
	I1216 13:55:26.704020   23286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 13:55:26.710601   23286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 13:55:26.720134   23286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20438.pem && ln -fs /usr/share/ca-certificates/20438.pem /etc/ssl/certs/20438.pem"
	I1216 13:55:26.729867   23286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20438.pem
	I1216 13:55:26.734233   23286 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 16 21:49 /usr/share/ca-certificates/20438.pem
	I1216 13:55:26.734280   23286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20438.pem
	I1216 13:55:26.740875   23286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20438.pem /etc/ssl/certs/51391683.0"
	I1216 13:55:26.749766   23286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/204382.pem && ln -fs /usr/share/ca-certificates/204382.pem /etc/ssl/certs/204382.pem"
	I1216 13:55:26.758616   23286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/204382.pem
	I1216 13:55:26.763010   23286 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 16 21:49 /usr/share/ca-certificates/204382.pem
	I1216 13:55:26.763060   23286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/204382.pem
	I1216 13:55:26.769782   23286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/204382.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 13:55:26.779223   23286 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1216 13:55:26.783282   23286 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1216 13:55:26.783329   23286 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-709000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-709000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:55:26.783429   23286 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 13:55:26.801224   23286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 13:55:26.809785   23286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 13:55:26.818596   23286 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1216 13:55:26.818649   23286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 13:55:26.827977   23286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 13:55:26.828009   23286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 13:55:26.884994   23286 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1216 13:55:26.885033   23286 kubeadm.go:322] [preflight] Running pre-flight checks
	I1216 13:55:27.135861   23286 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 13:55:27.135966   23286 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 13:55:27.136048   23286 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 13:55:27.307691   23286 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 13:55:27.308241   23286 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 13:55:27.308289   23286 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1216 13:55:27.389782   23286 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 13:55:27.439124   23286 out.go:204]   - Generating certificates and keys ...
	I1216 13:55:27.439252   23286 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1216 13:55:27.439352   23286 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1216 13:55:27.754748   23286 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 13:55:27.934884   23286 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1216 13:55:28.014943   23286 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1216 13:55:28.318867   23286 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1216 13:55:28.459928   23286 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1216 13:55:28.460139   23286 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-709000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 13:55:28.559424   23286 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1216 13:55:28.559552   23286 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-709000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 13:55:28.621516   23286 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 13:55:29.014890   23286 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 13:55:29.151025   23286 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1216 13:55:29.151078   23286 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 13:55:29.357190   23286 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 13:55:29.475506   23286 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 13:55:29.510780   23286 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 13:55:29.654656   23286 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 13:55:29.655120   23286 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 13:55:29.676130   23286 out.go:204]   - Booting up control plane ...
	I1216 13:55:29.676348   23286 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 13:55:29.676492   23286 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 13:55:29.676638   23286 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 13:55:29.676770   23286 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 13:55:29.677082   23286 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 13:56:09.664857   23286 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1216 13:56:09.665456   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:56:09.665678   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:56:14.666793   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:56:14.667027   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:56:24.667746   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:56:24.667918   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:56:44.668866   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:56:44.669041   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:57:24.671689   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:57:24.671967   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:57:24.671983   23286 kubeadm.go:322] 
	I1216 13:57:24.672023   23286 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1216 13:57:24.672065   23286 kubeadm.go:322] 		timed out waiting for the condition
	I1216 13:57:24.672074   23286 kubeadm.go:322] 
	I1216 13:57:24.672116   23286 kubeadm.go:322] 	This error is likely caused by:
	I1216 13:57:24.672161   23286 kubeadm.go:322] 		- The kubelet is not running
	I1216 13:57:24.672294   23286 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 13:57:24.672313   23286 kubeadm.go:322] 
	I1216 13:57:24.672413   23286 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 13:57:24.672462   23286 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1216 13:57:24.672500   23286 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1216 13:57:24.672508   23286 kubeadm.go:322] 
	I1216 13:57:24.672647   23286 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 13:57:24.672781   23286 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 13:57:24.672795   23286 kubeadm.go:322] 
	I1216 13:57:24.672901   23286 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1216 13:57:24.672958   23286 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1216 13:57:24.673057   23286 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1216 13:57:24.673100   23286 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1216 13:57:24.673106   23286 kubeadm.go:322] 
	I1216 13:57:24.674764   23286 kubeadm.go:322] W1216 21:55:26.884055    1696 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1216 13:57:24.674937   23286 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1216 13:57:24.675012   23286 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1216 13:57:24.675128   23286 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1216 13:57:24.675220   23286 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 13:57:24.675323   23286 kubeadm.go:322] W1216 21:55:29.659455    1696 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1216 13:57:24.675429   23286 kubeadm.go:322] W1216 21:55:29.660171    1696 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1216 13:57:24.675499   23286 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 13:57:24.675575   23286 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1216 13:57:24.675666   23286 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-709000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-709000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1216 21:55:26.884055    1696 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 21:55:29.659455    1696 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1216 21:55:29.660171    1696 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-709000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-709000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1216 21:55:26.884055    1696 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 21:55:29.659455    1696 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1216 21:55:29.660171    1696 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 13:57:24.675719   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1216 13:57:25.101585   23286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 13:57:25.112088   23286 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1216 13:57:25.112142   23286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 13:57:25.120529   23286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 13:57:25.120559   23286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 13:57:25.172838   23286 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1216 13:57:25.172885   23286 kubeadm.go:322] [preflight] Running pre-flight checks
	I1216 13:57:25.409946   23286 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 13:57:25.410037   23286 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 13:57:25.410123   23286 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 13:57:25.579015   23286 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 13:57:25.579835   23286 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 13:57:25.579911   23286 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1216 13:57:25.651706   23286 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 13:57:25.673346   23286 out.go:204]   - Generating certificates and keys ...
	I1216 13:57:25.673414   23286 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1216 13:57:25.673480   23286 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1216 13:57:25.673580   23286 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 13:57:25.673629   23286 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1216 13:57:25.673675   23286 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 13:57:25.673737   23286 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1216 13:57:25.673835   23286 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1216 13:57:25.673886   23286 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1216 13:57:25.673951   23286 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 13:57:25.674053   23286 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 13:57:25.674112   23286 kubeadm.go:322] [certs] Using the existing "sa" key
	I1216 13:57:25.674164   23286 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 13:57:25.833299   23286 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 13:57:26.082119   23286 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 13:57:26.278137   23286 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 13:57:26.559857   23286 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 13:57:26.560423   23286 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 13:57:26.582232   23286 out.go:204]   - Booting up control plane ...
	I1216 13:57:26.582405   23286 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 13:57:26.582555   23286 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 13:57:26.582721   23286 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 13:57:26.582871   23286 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 13:57:26.583149   23286 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 13:58:06.570671   23286 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1216 13:58:06.571649   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:58:06.571932   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:58:11.573307   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:58:11.573539   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:58:21.574391   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:58:21.574633   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:58:41.576425   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:58:41.576704   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:59:21.578713   23286 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 13:59:21.578970   23286 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 13:59:21.578987   23286 kubeadm.go:322] 
	I1216 13:59:21.579050   23286 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1216 13:59:21.579146   23286 kubeadm.go:322] 		timed out waiting for the condition
	I1216 13:59:21.579161   23286 kubeadm.go:322] 
	I1216 13:59:21.579197   23286 kubeadm.go:322] 	This error is likely caused by:
	I1216 13:59:21.579244   23286 kubeadm.go:322] 		- The kubelet is not running
	I1216 13:59:21.579421   23286 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 13:59:21.579441   23286 kubeadm.go:322] 
	I1216 13:59:21.579632   23286 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 13:59:21.579674   23286 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1216 13:59:21.579706   23286 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1216 13:59:21.579714   23286 kubeadm.go:322] 
	I1216 13:59:21.579810   23286 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 13:59:21.579884   23286 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 13:59:21.579891   23286 kubeadm.go:322] 
	I1216 13:59:21.579979   23286 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1216 13:59:21.580053   23286 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1216 13:59:21.580141   23286 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1216 13:59:21.580185   23286 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1216 13:59:21.580194   23286 kubeadm.go:322] 
	I1216 13:59:21.581807   23286 kubeadm.go:322] W1216 21:57:25.171956    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1216 13:59:21.581970   23286 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1216 13:59:21.582049   23286 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1216 13:59:21.582159   23286 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1216 13:59:21.582266   23286 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 13:59:21.582411   23286 kubeadm.go:322] W1216 21:57:26.565252    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1216 13:59:21.582516   23286 kubeadm.go:322] W1216 21:57:26.566021    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1216 13:59:21.582586   23286 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 13:59:21.582652   23286 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1216 13:59:21.582678   23286 kubeadm.go:406] StartCluster complete in 3m54.796261839s
	I1216 13:59:21.582779   23286 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 13:59:21.600439   23286 logs.go:284] 0 containers: []
	W1216 13:59:21.600453   23286 logs.go:286] No container was found matching "kube-apiserver"
	I1216 13:59:21.600521   23286 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 13:59:21.619708   23286 logs.go:284] 0 containers: []
	W1216 13:59:21.619729   23286 logs.go:286] No container was found matching "etcd"
	I1216 13:59:21.619799   23286 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 13:59:21.637854   23286 logs.go:284] 0 containers: []
	W1216 13:59:21.637868   23286 logs.go:286] No container was found matching "coredns"
	I1216 13:59:21.637957   23286 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 13:59:21.655649   23286 logs.go:284] 0 containers: []
	W1216 13:59:21.655664   23286 logs.go:286] No container was found matching "kube-scheduler"
	I1216 13:59:21.655739   23286 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 13:59:21.674577   23286 logs.go:284] 0 containers: []
	W1216 13:59:21.674602   23286 logs.go:286] No container was found matching "kube-proxy"
	I1216 13:59:21.674684   23286 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 13:59:21.692547   23286 logs.go:284] 0 containers: []
	W1216 13:59:21.692561   23286 logs.go:286] No container was found matching "kube-controller-manager"
	I1216 13:59:21.692638   23286 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 13:59:21.711331   23286 logs.go:284] 0 containers: []
	W1216 13:59:21.711345   23286 logs.go:286] No container was found matching "kindnet"
	I1216 13:59:21.711353   23286 logs.go:123] Gathering logs for kubelet ...
	I1216 13:59:21.711360   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 13:59:21.747127   23286 logs.go:123] Gathering logs for dmesg ...
	I1216 13:59:21.747141   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 13:59:21.759617   23286 logs.go:123] Gathering logs for describe nodes ...
	I1216 13:59:21.759636   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 13:59:21.811663   23286 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 13:59:21.811676   23286 logs.go:123] Gathering logs for Docker ...
	I1216 13:59:21.811685   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 13:59:21.827659   23286 logs.go:123] Gathering logs for container status ...
	I1216 13:59:21.827674   23286 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 13:59:21.884221   23286 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1216 21:57:25.171956    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 21:57:26.565252    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1216 21:57:26.566021    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 13:59:21.884245   23286 out.go:239] * 
	* 
	W1216 13:59:21.884291   23286 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1216 21:57:25.171956    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 21:57:26.565252    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1216 21:57:26.566021    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1216 21:57:25.171956    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 21:57:26.565252    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1216 21:57:26.566021    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 13:59:21.884322   23286 out.go:239] * 
	* 
	W1216 13:59:21.884989   23286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 13:59:21.947584   23286 out.go:177] 
	W1216 13:59:22.010739   23286 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1216 21:57:25.171956    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 21:57:26.565252    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1216 21:57:26.566021    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1216 21:57:25.171956    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 21:57:26.565252    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1216 21:57:26.566021    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 13:59:22.010828   23286 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 13:59:22.010877   23286 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 13:59:22.053688   23286 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-709000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (263.48s)
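The kubeadm output above shows the kubelet never answering on localhost:10248, and minikube's closing suggestion points at the cgroup driver. A minimal troubleshooting sketch built only from the commands already quoted in this log (profile name and flags are copied from the log above; the systemd cgroup-driver override is a hypothesis taken from minikube's own suggestion, not a confirmed fix):

	# Inspect the kubelet inside the minikube node, as the kubeadm output suggests
	out/minikube-darwin-amd64 -p ingress-addon-legacy-709000 ssh -- sudo journalctl -xeu kubelet
	# Retry the start with the cgroup driver minikube itself suggests
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-709000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --extra-config=kubelet.cgroup-driver=systemd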

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (101.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-709000 addons enable ingress --alsologtostderr -v=5
E1216 14:00:11.414026   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-709000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m41.010329003s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 13:59:22.190137   23455 out.go:296] Setting OutFile to fd 1 ...
	I1216 13:59:22.190939   23455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:59:22.190945   23455 out.go:309] Setting ErrFile to fd 2...
	I1216 13:59:22.190949   23455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:59:22.191137   23455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 13:59:22.191502   23455 mustload.go:65] Loading cluster: ingress-addon-legacy-709000
	I1216 13:59:22.191805   23455 config.go:182] Loaded profile config "ingress-addon-legacy-709000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1216 13:59:22.191822   23455 addons.go:594] checking whether the cluster is paused
	I1216 13:59:22.191901   23455 config.go:182] Loaded profile config "ingress-addon-legacy-709000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1216 13:59:22.191916   23455 host.go:66] Checking if "ingress-addon-legacy-709000" exists ...
	I1216 13:59:22.192399   23455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Status}}
	I1216 13:59:22.243460   23455 ssh_runner.go:195] Run: systemctl --version
	I1216 13:59:22.243557   23455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:59:22.295252   23455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 13:59:22.388294   23455 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 13:59:22.427695   23455 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1216 13:59:22.449387   23455 config.go:182] Loaded profile config "ingress-addon-legacy-709000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1216 13:59:22.449417   23455 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-709000"
	I1216 13:59:22.449451   23455 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-709000"
	I1216 13:59:22.449491   23455 host.go:66] Checking if "ingress-addon-legacy-709000" exists ...
	I1216 13:59:22.449928   23455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Status}}
	I1216 13:59:22.525234   23455 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1216 13:59:22.547062   23455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1216 13:59:22.568810   23455 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1216 13:59:22.590745   23455 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1216 13:59:22.611847   23455 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 13:59:22.611866   23455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1216 13:59:22.611946   23455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 13:59:22.665151   23455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 13:59:22.768053   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:22.831980   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:22.832005   23455 retry.go:31] will retry after 160.322804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:22.992763   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:23.064345   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:23.064368   23455 retry.go:31] will retry after 398.914836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:23.463604   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:23.532479   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:23.532500   23455 retry.go:31] will retry after 462.984416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:23.996391   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:24.051036   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:24.051056   23455 retry.go:31] will retry after 910.448732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:24.961897   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:25.014151   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:25.014175   23455 retry.go:31] will retry after 662.457343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:25.677762   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:25.766538   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:25.766561   23455 retry.go:31] will retry after 1.587109258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:27.353833   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:27.437810   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:27.437828   23455 retry.go:31] will retry after 2.407869331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:29.847085   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:29.897215   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:29.897234   23455 retry.go:31] will retry after 2.971526202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:32.869763   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:32.931942   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:32.931959   23455 retry.go:31] will retry after 4.813479722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:37.746820   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:37.796065   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:37.796083   23455 retry.go:31] will retry after 8.85236087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:46.650152   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:46.708929   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:46.708949   23455 retry.go:31] will retry after 10.016284874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:56.726552   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 13:59:56.777713   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 13:59:56.777747   23455 retry.go:31] will retry after 26.096618728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:00:22.875599   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 14:00:22.936140   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:00:22.936155   23455 retry.go:31] will retry after 40.034492588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:02.973456   23455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1216 14:01:03.023857   23455 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:03.023888   23455 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-709000"
	I1216 14:01:03.046664   23455 out.go:177] * Verifying ingress addon...
	I1216 14:01:03.069691   23455 out.go:177] 
	W1216 14:01:03.092190   23455 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-709000" does not exist: client config: context "ingress-addon-legacy-709000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-709000" does not exist: client config: context "ingress-addon-legacy-709000" does not exist]
	W1216 14:01:03.092208   23455 out.go:239] * 
	* 
	W1216 14:01:03.100991   23455 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 14:01:03.122313   23455 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
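Since the apiserver on localhost:8443 was never reachable (every kubectl apply above was refused), the addon enable could not succeed; this failure follows on from the kubelet problem in the previous subtest. A quick pre-check sketch one might run before enabling addons (profile name taken from the log; purely illustrative):

	# Confirm the control plane is actually up before touching addons
	out/minikube-darwin-amd64 -p ingress-addon-legacy-709000 status
	out/minikube-darwin-amd64 -p ingress-addon-legacy-709000 kubectl -- get pods -n kube-system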
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-709000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-709000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd",
	        "Created": "2023-12-16T21:55:09.679338234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482104,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-16T21:55:09.886395832Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:455aa6a0dac2432f38142b6f5a4061c13472373a16ab9a2802b752c8627214c2",
	        "ResolvConfPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/hosts",
	        "LogPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd-json.log",
	        "Name": "/ingress-addon-legacy-709000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-709000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-709000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394-init/diff:/var/lib/docker/overlay2/1c976a79932806a3881e14b9c780dba8e119bab692e4983e4e1e079dba742c9b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-709000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-709000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-709000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-709000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-709000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13ba12e0740ba3ba2fcab8f91034c6c91480c2c64f711d86b7df0794cc1e194f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56474"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56475"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13ba12e0740b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-709000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5f092287092b",
	                        "ingress-addon-legacy-709000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "27f29922b8026a506879f4cc8875f0e6b4f5526427318b5a2bef509460c6f21a",
	                    "EndpointID": "0aad1342d9dd2d9535565ed0a07418f891e41befcaa688902c269c1e8e18b048",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-709000 -n ingress-addon-legacy-709000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-709000 -n ingress-addon-legacy-709000: exit status 6 (380.003525ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:01:03.568534   23499 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-709000" does not appear in /Users/jenkins/minikube-integration/17806-19996/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-709000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (101.45s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (113.11s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-709000 addons enable ingress-dns --alsologtostderr -v=5
E1216 14:01:56.728668   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:02:27.565505   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:02:55.257923   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-709000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m52.673039846s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:01:03.634856   23509 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:01:03.636156   23509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:01:03.636163   23509 out.go:309] Setting ErrFile to fd 2...
	I1216 14:01:03.636167   23509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:01:03.636349   23509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:01:03.636707   23509 mustload.go:65] Loading cluster: ingress-addon-legacy-709000
	I1216 14:01:03.637012   23509 config.go:182] Loaded profile config "ingress-addon-legacy-709000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1216 14:01:03.637029   23509 addons.go:594] checking whether the cluster is paused
	I1216 14:01:03.637108   23509 config.go:182] Loaded profile config "ingress-addon-legacy-709000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1216 14:01:03.637123   23509 host.go:66] Checking if "ingress-addon-legacy-709000" exists ...
	I1216 14:01:03.637605   23509 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Status}}
	I1216 14:01:03.688290   23509 ssh_runner.go:195] Run: systemctl --version
	I1216 14:01:03.688383   23509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 14:01:03.739180   23509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 14:01:03.831471   23509 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 14:01:03.874328   23509 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1216 14:01:03.895289   23509 config.go:182] Loaded profile config "ingress-addon-legacy-709000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1216 14:01:03.895310   23509 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-709000"
	I1216 14:01:03.895319   23509 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-709000"
	I1216 14:01:03.895365   23509 host.go:66] Checking if "ingress-addon-legacy-709000" exists ...
	I1216 14:01:03.895793   23509 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-709000 --format={{.State.Status}}
	I1216 14:01:03.969136   23509 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1216 14:01:03.990101   23509 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1216 14:01:04.011228   23509 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 14:01:04.011243   23509 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1216 14:01:04.011319   23509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-709000
	I1216 14:01:04.061760   23509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56471 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/ingress-addon-legacy-709000/id_rsa Username:docker}
	I1216 14:01:04.166145   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:04.215622   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:04.215660   23509 retry.go:31] will retry after 140.65334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:04.358123   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:04.417300   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:04.417328   23509 retry.go:31] will retry after 354.566618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:04.772823   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:04.828943   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:04.828961   23509 retry.go:31] will retry after 383.139154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:05.213744   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:05.271847   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:05.271865   23509 retry.go:31] will retry after 693.155965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:05.965314   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:06.013739   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:06.013758   23509 retry.go:31] will retry after 1.022495057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:07.038266   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:07.092127   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:07.092145   23509 retry.go:31] will retry after 1.395635101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:08.490150   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:08.560566   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:08.560584   23509 retry.go:31] will retry after 2.168710693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:10.729515   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:10.775809   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:10.775826   23509 retry.go:31] will retry after 3.444168755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:14.222368   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:14.287137   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:14.287161   23509 retry.go:31] will retry after 9.364338986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:23.652569   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:23.711556   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:23.711574   23509 retry.go:31] will retry after 8.377682161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:32.090993   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:32.141510   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:32.141527   23509 retry.go:31] will retry after 16.457470323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:48.599923   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:01:48.650118   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:01:48.657905   23509 retry.go:31] will retry after 28.372007998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:02:17.031039   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:02:17.091567   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:02:17.091593   23509 retry.go:31] will retry after 38.999309981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:02:56.091788   23509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1216 14:02:56.162500   23509 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 14:02:56.184285   23509 out.go:177] 
	W1216 14:02:56.204901   23509 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1216 14:02:56.204929   23509 out.go:239] * 
	* 
	W1216 14:02:56.209629   23509 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 14:02:56.230970   23509 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-709000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-709000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd",
	        "Created": "2023-12-16T21:55:09.679338234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482104,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-16T21:55:09.886395832Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:455aa6a0dac2432f38142b6f5a4061c13472373a16ab9a2802b752c8627214c2",
	        "ResolvConfPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/hosts",
	        "LogPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd-json.log",
	        "Name": "/ingress-addon-legacy-709000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-709000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-709000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394-init/diff:/var/lib/docker/overlay2/1c976a79932806a3881e14b9c780dba8e119bab692e4983e4e1e079dba742c9b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-709000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-709000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-709000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-709000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-709000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13ba12e0740ba3ba2fcab8f91034c6c91480c2c64f711d86b7df0794cc1e194f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56474"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56475"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13ba12e0740b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-709000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5f092287092b",
	                        "ingress-addon-legacy-709000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "27f29922b8026a506879f4cc8875f0e6b4f5526427318b5a2bef509460c6f21a",
	                    "EndpointID": "0aad1342d9dd2d9535565ed0a07418f891e41befcaa688902c269c1e8e18b048",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-709000 -n ingress-addon-legacy-709000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-709000 -n ingress-addon-legacy-709000: exit status 6 (379.940083ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:02:56.677411   23545 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-709000" does not appear in /Users/jenkins/minikube-integration/17806-19996/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-709000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (113.11s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:200: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-709000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-709000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd",
	        "Created": "2023-12-16T21:55:09.679338234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482104,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-16T21:55:09.886395832Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:455aa6a0dac2432f38142b6f5a4061c13472373a16ab9a2802b752c8627214c2",
	        "ResolvConfPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/hosts",
	        "LogPath": "/var/lib/docker/containers/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd/5f092287092b8a61da13aae119284a6ddc4161505d231f685ba4f94f717c4dfd-json.log",
	        "Name": "/ingress-addon-legacy-709000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-709000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-709000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394-init/diff:/var/lib/docker/overlay2/1c976a79932806a3881e14b9c780dba8e119bab692e4983e4e1e079dba742c9b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4a4991588243d45cc80477635f5fd3347e890f7f555f375ea85dac5956a0394/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-709000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-709000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-709000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-709000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-709000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13ba12e0740ba3ba2fcab8f91034c6c91480c2c64f711d86b7df0794cc1e194f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56474"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56475"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13ba12e0740b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-709000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5f092287092b",
	                        "ingress-addon-legacy-709000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "27f29922b8026a506879f4cc8875f0e6b4f5526427318b5a2bef509460c6f21a",
	                    "EndpointID": "0aad1342d9dd2d9535565ed0a07418f891e41befcaa688902c269c1e8e18b048",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-709000 -n ingress-addon-legacy-709000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-709000 -n ingress-addon-legacy-709000: exit status 6 (386.141747ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:02:57.115972   23557 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-709000" does not appear in /Users/jenkins/minikube-integration/17806-19996/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-709000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (885.45s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-381000 ssh -- ls /minikube-host
E1216 14:07:27.570833   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:08:19.781212   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:11:56.736252   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:12:27.574551   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:13:50.630189   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:16:56.740826   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:17:27.579182   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-381000 ssh -- ls /minikube-host: signal: killed (14m45.015635453s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-381000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-381000
helpers_test.go:235: (dbg) docker inspect mount-start-2-381000:

-- stdout --
	[
	    {
	        "Id": "db7028294755a5205f40d27f9b6b924489b028958032fb538f975fec0e7d16e8",
	        "Created": "2023-12-16T22:06:54.80643793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 529899,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-16T22:06:55.047843423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:455aa6a0dac2432f38142b6f5a4061c13472373a16ab9a2802b752c8627214c2",
	        "ResolvConfPath": "/var/lib/docker/containers/db7028294755a5205f40d27f9b6b924489b028958032fb538f975fec0e7d16e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/db7028294755a5205f40d27f9b6b924489b028958032fb538f975fec0e7d16e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/db7028294755a5205f40d27f9b6b924489b028958032fb538f975fec0e7d16e8/hosts",
	        "LogPath": "/var/lib/docker/containers/db7028294755a5205f40d27f9b6b924489b028958032fb538f975fec0e7d16e8/db7028294755a5205f40d27f9b6b924489b028958032fb538f975fec0e7d16e8-json.log",
	        "Name": "/mount-start-2-381000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "mount-start-2-381000:/var",
	                "/host_mnt/Users:/minikube-host",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-381000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16c2649cd95886c6d01e3d6da5e5cbc15d5fbefe4d1e6359e01c1d13ded96867-init/diff:/var/lib/docker/overlay2/1c976a79932806a3881e14b9c780dba8e119bab692e4983e4e1e079dba742c9b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c2649cd95886c6d01e3d6da5e5cbc15d5fbefe4d1e6359e01c1d13ded96867/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c2649cd95886c6d01e3d6da5e5cbc15d5fbefe4d1e6359e01c1d13ded96867/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c2649cd95886c6d01e3d6da5e5cbc15d5fbefe4d1e6359e01c1d13ded96867/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-381000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-381000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-381000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-381000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-381000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba05a506b763f6539daf0cf1a2781e7a5576482e51b3da717d58fd302658282b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56763"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56764"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56765"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56766"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56767"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ba05a506b763",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-381000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "db7028294755",
	                        "mount-start-2-381000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "95db4d9156e46961b669e0925dbe00c69a1a0680b25fec6e1796a8a87dc10c0a",
	                    "EndpointID": "753bfad684432dbb8cd8dba39983134734fd3f4d651b8a5d9e1ee5bd36c510a6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
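
The inspect output above shows the container running with the /host_mnt/Users -> /minikube-host bind mount configured, so what stalled was the `ls /minikube-host` issued over SSH, not a missing mount definition. For reference, the same mount list can be read programmatically; a small sketch using the Docker Engine Go SDK (github.com/docker/docker/client), not part of the test suite:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Talk to the local Docker daemon the same way the docker CLI does.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Container name taken from the post-mortem above.
	info, err := cli.ContainerInspect(context.Background(), "mount-start-2-381000")
	if err != nil {
		log.Fatal(err)
	}

	// Print each mount's type, source, destination and writability.
	for _, m := range info.Mounts {
		fmt.Printf("%-6s %s -> %s rw=%v\n", m.Type, m.Source, m.Destination, m.RW)
	}
}
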
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-381000 -n mount-start-2-381000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-381000 -n mount-start-2-381000: exit status 6 (380.167054ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 14:21:45.879878   25308 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-381000" does not appear in /Users/jenkins/minikube-integration/17806-19996/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-381000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (885.45s)
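
The mount probe above (`ssh -- ls /minikube-host`) never returned and was only stopped by signal: killed after 14m45s. When reproducing this locally it helps to bound the probe explicitly so a stalled host mount fails fast; a minimal sketch using os/exec with a context timeout (the 30s value is arbitrary, not taken from the test suite):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Give the probe a hard deadline instead of letting it hang.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Same command the test runs; binary path and profile name come from the log above.
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
		"-p", "mount-start-2-381000", "ssh", "--", "ls", "/minikube-host")

	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		fmt.Println("mount probe timed out after 30s")
		return
	}
	if err != nil {
		fmt.Printf("mount probe failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("mount contents:\n%s", out)
}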

TestMultiNode/serial/FreshStart2Nodes (754.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-774000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1216 14:24:59.798825   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:26:56.751219   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:27:27.588501   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:30:30.646767   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:31:56.754502   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:32:27.592884   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-774000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m34.149135533s)

-- stdout --
	* [multinode-774000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-774000 in cluster multinode-774000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-774000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1216 14:22:56.562030   25428 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:22:56.562311   25428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:22:56.562318   25428 out.go:309] Setting ErrFile to fd 2...
	I1216 14:22:56.562322   25428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:22:56.562501   25428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:22:56.563961   25428 out.go:303] Setting JSON to false
	I1216 14:22:56.586504   25428 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8545,"bootTime":1702756831,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 14:22:56.586615   25428 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 14:22:56.608149   25428 out.go:177] * [multinode-774000] minikube v1.32.0 on Darwin 14.2
	I1216 14:22:56.649995   25428 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 14:22:56.650120   25428 notify.go:220] Checking for updates...
	I1216 14:22:56.692963   25428 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 14:22:56.735061   25428 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 14:22:56.756105   25428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 14:22:56.776992   25428 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 14:22:56.798046   25428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 14:22:56.819293   25428 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 14:22:56.876315   25428 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 14:22:56.876487   25428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 14:22:56.977584   25428 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-16 22:22:56.966607298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 14:22:56.999292   25428 out.go:177] * Using the docker driver based on user configuration
	I1216 14:22:57.041209   25428 start.go:298] selected driver: docker
	I1216 14:22:57.041257   25428 start.go:902] validating driver "docker" against <nil>
	I1216 14:22:57.041274   25428 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 14:22:57.045746   25428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 14:22:57.146695   25428 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-16 22:22:57.136257267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 14:22:57.146879   25428 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1216 14:22:57.147072   25428 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 14:22:57.168481   25428 out.go:177] * Using Docker Desktop driver with root privileges
	I1216 14:22:57.189572   25428 cni.go:84] Creating CNI manager for ""
	I1216 14:22:57.189606   25428 cni.go:136] 0 nodes found, recommending kindnet
	I1216 14:22:57.189620   25428 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 14:22:57.189643   25428 start_flags.go:323] config:
	{Name:multinode-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-774000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 14:22:57.232502   25428 out.go:177] * Starting control plane node multinode-774000 in cluster multinode-774000
	I1216 14:22:57.253590   25428 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 14:22:57.274361   25428 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 14:22:57.316498   25428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:22:57.316597   25428 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 14:22:57.316593   25428 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 14:22:57.316611   25428 cache.go:56] Caching tarball of preloaded images
	I1216 14:22:57.316836   25428 preload.go:174] Found /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 14:22:57.316861   25428 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 14:22:57.318348   25428 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/multinode-774000/config.json ...
	I1216 14:22:57.318484   25428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/multinode-774000/config.json: {Name:mka978cd18ca2cdce9bf18f1a9a398e49b90ca2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 14:22:57.368874   25428 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 14:22:57.368891   25428 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 14:22:57.368911   25428 cache.go:194] Successfully downloaded all kic artifacts
	I1216 14:22:57.368960   25428 start.go:365] acquiring machines lock for multinode-774000: {Name:mkbfbdd77472705ce76cfd99f9e1c31146413090 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 14:22:57.369108   25428 start.go:369] acquired machines lock for "multinode-774000" in 137.332µs
	I1216 14:22:57.369132   25428 start.go:93] Provisioning new machine with config: &{Name:multinode-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-774000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 14:22:57.369204   25428 start.go:125] createHost starting for "" (driver="docker")
	I1216 14:22:57.411424   25428 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1216 14:22:57.411815   25428 start.go:159] libmachine.API.Create for "multinode-774000" (driver="docker")
	I1216 14:22:57.411867   25428 client.go:168] LocalClient.Create starting
	I1216 14:22:57.412067   25428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 14:22:57.412164   25428 main.go:141] libmachine: Decoding PEM data...
	I1216 14:22:57.412202   25428 main.go:141] libmachine: Parsing certificate...
	I1216 14:22:57.412301   25428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 14:22:57.412372   25428 main.go:141] libmachine: Decoding PEM data...
	I1216 14:22:57.412388   25428 main.go:141] libmachine: Parsing certificate...
	I1216 14:22:57.413282   25428 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 14:22:57.464780   25428 cli_runner.go:211] docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 14:22:57.464870   25428 network_create.go:281] running [docker network inspect multinode-774000] to gather additional debugging logs...
	I1216 14:22:57.464886   25428 cli_runner.go:164] Run: docker network inspect multinode-774000
	W1216 14:22:57.515029   25428 cli_runner.go:211] docker network inspect multinode-774000 returned with exit code 1
	I1216 14:22:57.515056   25428 network_create.go:284] error running [docker network inspect multinode-774000]: docker network inspect multinode-774000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-774000 not found
	I1216 14:22:57.515067   25428 network_create.go:286] output of [docker network inspect multinode-774000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-774000 not found
	
	** /stderr **
	I1216 14:22:57.515197   25428 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:22:57.567243   25428 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:22:57.567651   25428 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f51c0}
	I1216 14:22:57.567668   25428 network_create.go:124] attempt to create docker network multinode-774000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1216 14:22:57.567748   25428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000
	I1216 14:22:57.654027   25428 network_create.go:108] docker network multinode-774000 192.168.58.0/24 created
	I1216 14:22:57.654068   25428 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-774000" container
	I1216 14:22:57.654197   25428 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 14:22:57.705230   25428 cli_runner.go:164] Run: docker volume create multinode-774000 --label name.minikube.sigs.k8s.io=multinode-774000 --label created_by.minikube.sigs.k8s.io=true
	I1216 14:22:57.756716   25428 oci.go:103] Successfully created a docker volume multinode-774000
	I1216 14:22:57.756830   25428 cli_runner.go:164] Run: docker run --rm --name multinode-774000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-774000 --entrypoint /usr/bin/test -v multinode-774000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 14:22:58.133610   25428 oci.go:107] Successfully prepared a docker volume multinode-774000
	I1216 14:22:58.133649   25428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:22:58.133666   25428 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 14:22:58.133778   25428 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-774000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 14:28:57.418895   25428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 14:28:57.419031   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:28:57.473628   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:28:57.473756   25428 retry.go:31] will retry after 188.427797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:28:57.662721   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:28:57.715562   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:28:57.715670   25428 retry.go:31] will retry after 198.771395ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:28:57.915338   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:28:57.968748   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:28:57.968857   25428 retry.go:31] will retry after 710.590481ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:28:58.680337   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:28:58.734618   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:28:58.734719   25428 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:28:58.734735   25428 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:28:58.734797   25428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 14:28:58.734857   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:28:58.787149   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:28:58.787243   25428 retry.go:31] will retry after 309.995534ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:28:59.098187   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:28:59.150260   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:28:59.150354   25428 retry.go:31] will retry after 291.576318ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:28:59.443482   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:28:59.497410   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:28:59.497524   25428 retry.go:31] will retry after 604.141464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:00.103222   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:29:00.154437   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:29:00.154537   25428 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:29:00.154569   25428 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:00.154587   25428 start.go:128] duration metric: createHost completed in 6m2.779851088s
	I1216 14:29:00.154594   25428 start.go:83] releasing machines lock for "multinode-774000", held for 6m2.779955748s
	W1216 14:29:00.154607   25428 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1216 14:29:00.155036   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:00.204941   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:00.205002   25428 delete.go:82] Unable to get host status for multinode-774000, assuming it has already been deleted: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	W1216 14:29:00.205101   25428 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1216 14:29:00.205110   25428 start.go:709] Will try again in 5 seconds ...
	I1216 14:29:05.206953   25428 start.go:365] acquiring machines lock for multinode-774000: {Name:mkbfbdd77472705ce76cfd99f9e1c31146413090 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 14:29:05.208084   25428 start.go:369] acquired machines lock for "multinode-774000" in 155.899µs
	I1216 14:29:05.208118   25428 start.go:96] Skipping create...Using existing machine configuration
	I1216 14:29:05.208133   25428 fix.go:54] fixHost starting: 
	I1216 14:29:05.208667   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:05.262311   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:05.262363   25428 fix.go:102] recreateIfNeeded on multinode-774000: state= err=unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:05.262383   25428 fix.go:107] machineExists: false. err=machine does not exist
	I1216 14:29:05.283918   25428 out.go:177] * docker "multinode-774000" container is missing, will recreate.
	I1216 14:29:05.325628   25428 delete.go:124] DEMOLISHING multinode-774000 ...
	I1216 14:29:05.325815   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:05.378005   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:29:05.378058   25428 stop.go:75] unable to get state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:05.378081   25428 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:05.378484   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:05.428161   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:05.428208   25428 delete.go:82] Unable to get host status for multinode-774000, assuming it has already been deleted: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:05.428296   25428 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:29:05.477789   25428 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:29:05.477821   25428 kic.go:371] could not find the container multinode-774000 to remove it. will try anyways
	I1216 14:29:05.477912   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:05.528402   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:29:05.528453   25428 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:05.528536   25428 cli_runner.go:164] Run: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0"
	W1216 14:29:05.578303   25428 cli_runner.go:211] docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 14:29:05.578335   25428 oci.go:650] error shutdown multinode-774000: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:06.580740   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:06.647918   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:06.647962   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:06.647973   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:06.648005   25428 retry.go:31] will retry after 452.034818ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:07.101021   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:07.155258   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:07.155328   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:07.155342   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:07.155365   25428 retry.go:31] will retry after 663.021944ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:07.820744   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:07.876872   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:07.876934   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:07.876949   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:07.876979   25428 retry.go:31] will retry after 1.593358624s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:09.472719   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:09.525125   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:09.525168   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:09.525181   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:09.525205   25428 retry.go:31] will retry after 1.237056732s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:10.762571   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:10.814903   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:10.814949   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:10.814964   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:10.814989   25428 retry.go:31] will retry after 1.799167543s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:12.614587   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:12.666871   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:12.666927   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:12.666936   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:12.666958   25428 retry.go:31] will retry after 3.479627231s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:16.147641   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:16.201145   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:16.201191   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:16.201202   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:16.201227   25428 retry.go:31] will retry after 7.271820306s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:23.475472   25428 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:29:23.528172   25428 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:29:23.528216   25428 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:29:23.528233   25428 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:29:23.528263   25428 oci.go:88] couldn't shut down multinode-774000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	 
	I1216 14:29:23.528346   25428 cli_runner.go:164] Run: docker rm -f -v multinode-774000
	I1216 14:29:23.578739   25428 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:29:23.629021   25428 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:29:23.629138   25428 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:29:23.679776   25428 cli_runner.go:164] Run: docker network rm multinode-774000
	I1216 14:29:23.784662   25428 fix.go:114] Sleeping 1 second for extra luck!
	I1216 14:29:24.786946   25428 start.go:125] createHost starting for "" (driver="docker")
	I1216 14:29:24.810066   25428 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1216 14:29:24.810226   25428 start.go:159] libmachine.API.Create for "multinode-774000" (driver="docker")
	I1216 14:29:24.810259   25428 client.go:168] LocalClient.Create starting
	I1216 14:29:24.810480   25428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 14:29:24.810575   25428 main.go:141] libmachine: Decoding PEM data...
	I1216 14:29:24.810599   25428 main.go:141] libmachine: Parsing certificate...
	I1216 14:29:24.810695   25428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 14:29:24.810773   25428 main.go:141] libmachine: Decoding PEM data...
	I1216 14:29:24.810789   25428 main.go:141] libmachine: Parsing certificate...
	I1216 14:29:24.811504   25428 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 14:29:24.865470   25428 cli_runner.go:211] docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 14:29:24.865567   25428 network_create.go:281] running [docker network inspect multinode-774000] to gather additional debugging logs...
	I1216 14:29:24.865589   25428 cli_runner.go:164] Run: docker network inspect multinode-774000
	W1216 14:29:24.916291   25428 cli_runner.go:211] docker network inspect multinode-774000 returned with exit code 1
	I1216 14:29:24.916323   25428 network_create.go:284] error running [docker network inspect multinode-774000]: docker network inspect multinode-774000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-774000 not found
	I1216 14:29:24.916336   25428 network_create.go:286] output of [docker network inspect multinode-774000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-774000 not found
	
	** /stderr **
	I1216 14:29:24.916487   25428 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:29:24.968653   25428 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:29:24.970277   25428 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:29:24.970633   25428 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f56d0}
	I1216 14:29:24.970652   25428 network_create.go:124] attempt to create docker network multinode-774000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1216 14:29:24.970730   25428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000
	W1216 14:29:25.021381   25428 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000 returned with exit code 1
	W1216 14:29:25.021423   25428 network_create.go:149] failed to create docker network multinode-774000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1216 14:29:25.021442   25428 network_create.go:116] failed to create docker network multinode-774000 192.168.67.0/24, will retry: subnet is taken
	I1216 14:29:25.022931   25428 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:29:25.024011   25428 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006937c0}
	I1216 14:29:25.024028   25428 network_create.go:124] attempt to create docker network multinode-774000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1216 14:29:25.024098   25428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000
	I1216 14:29:25.111820   25428 network_create.go:108] docker network multinode-774000 192.168.76.0/24 created
	I1216 14:29:25.111853   25428 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-774000" container
	I1216 14:29:25.111968   25428 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 14:29:25.163134   25428 cli_runner.go:164] Run: docker volume create multinode-774000 --label name.minikube.sigs.k8s.io=multinode-774000 --label created_by.minikube.sigs.k8s.io=true
	I1216 14:29:25.213676   25428 oci.go:103] Successfully created a docker volume multinode-774000
	I1216 14:29:25.213816   25428 cli_runner.go:164] Run: docker run --rm --name multinode-774000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-774000 --entrypoint /usr/bin/test -v multinode-774000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 14:29:25.498893   25428 oci.go:107] Successfully prepared a docker volume multinode-774000
	I1216 14:29:25.498927   25428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:29:25.498940   25428 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 14:29:25.499050   25428 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-774000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 14:35:24.815986   25428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 14:35:24.816077   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:24.870471   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:24.870587   25428 retry.go:31] will retry after 248.725648ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:25.120209   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:25.171976   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:25.172073   25428 retry.go:31] will retry after 206.309932ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:25.379378   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:25.431931   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:25.432037   25428 retry.go:31] will retry after 601.795369ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:26.034746   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:26.086571   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:35:26.086675   25428 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:35:26.086690   25428 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:26.086761   25428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 14:35:26.086818   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:26.136926   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:26.137031   25428 retry.go:31] will retry after 315.319888ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:26.452789   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:26.508059   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:26.508194   25428 retry.go:31] will retry after 191.763501ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:26.702314   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:26.757357   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:26.757450   25428 retry.go:31] will retry after 597.457176ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:27.356104   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:27.408585   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:35:27.408686   25428 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:35:27.408701   25428 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:27.408717   25428 start.go:128] duration metric: createHost completed in 6m2.61622801s
	I1216 14:35:27.408797   25428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 14:35:27.408858   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:27.458998   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:27.459084   25428 retry.go:31] will retry after 262.506689ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:27.722005   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:27.776420   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:27.776517   25428 retry.go:31] will retry after 443.965257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:28.220778   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:28.276034   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:28.276139   25428 retry.go:31] will retry after 807.636116ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:29.084224   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:29.136260   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:35:29.136360   25428 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:35:29.136376   25428 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:29.136437   25428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 14:35:29.136503   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:29.186748   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:29.186839   25428 retry.go:31] will retry after 159.171076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:29.346275   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:29.397693   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:29.397822   25428 retry.go:31] will retry after 528.778405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:29.927104   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:29.979432   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:35:29.979521   25428 retry.go:31] will retry after 503.524442ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:30.483589   25428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:35:30.536504   25428 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:35:30.536613   25428 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:35:30.536628   25428 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:35:30.536640   25428 fix.go:56] fixHost completed within 6m25.32264597s
	I1216 14:35:30.536647   25428 start.go:83] releasing machines lock for "multinode-774000", held for 6m25.322684416s
	W1216 14:35:30.536729   25428 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-774000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-774000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1216 14:35:30.580312   25428 out.go:177] 
	W1216 14:35:30.601373   25428 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1216 14:35:30.601447   25428 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1216 14:35:30.601513   25428 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1216 14:35:30.622959   25428 out.go:177] 

                                                
                                                
** /stderr **
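The subnet selection visible in the stderr above (192.168.49.0/24 and 192.168.58.0/24 skipped as reserved, 192.168.67.0/24 rejected by the daemon with "Pool overlaps with other one on this address space", 192.168.76.0/24 finally accepted) amounts to walking candidate private /24 ranges until one is free. A minimal Go sketch of that pattern, assuming a hypothetical isTaken check rather than minikube's actual network_create.go logic:

	// Illustrative sketch only; the candidate step and isTaken callback are assumptions.
	package main
	
	import "fmt"
	
	func pickSubnet(isTaken func(cidr string) bool) (string, bool) {
		// Candidate private /24 ranges stepped by 9 in the third octet,
		// matching the 49 -> 58 -> 67 -> 76 progression seen in the log.
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if isTaken(cidr) {
				fmt.Println("skipping subnet", cidr, "that is reserved")
				continue
			}
			return cidr, true
		}
		return "", false
	}
	
	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true, // stands in for docker rejecting the pool as overlapping
		}
		if cidr, ok := pickSubnet(func(c string) bool { return taken[c] }); ok {
			fmt.Println("using free private subnet", cidr)
		}
	}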
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-774000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (108.906982ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:35:30.842240   25724 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (754.32s)
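The long run of retry.go lines earlier in this test (delays growing from roughly 450ms to 7s before the shutdown check is abandoned) reflects a jittered, roughly doubling backoff around the container-state check. A minimal Go sketch of that retry shape, with a hypothetical verify callback and timings; this is not minikube's retry.go itself:

	// Illustrative sketch only; attempts, initial delay, and the failing op are assumptions.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Jittered, roughly doubling delay, mirroring the growing
			// 452ms -> 663ms -> 1.6s -> ... intervals in the log above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}
	
	func main() {
		// Hypothetical operation standing in for "verify the container exited".
		err := retryWithBackoff(5, 400*time.Millisecond, func() error {
			return errors.New(`unknown state "multinode-774000"`)
		})
		fmt.Println("gave up:", err)
	}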

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (106.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (214.040601ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-774000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- rollout status deployment/busybox: exit status 1 (98.550876ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.494496ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.467481ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.63701ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.998597ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.353632ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.297298ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.218605ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.24533ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.975304ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.989307ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1216 14:36:56.759284   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.574187ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (93.563974ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- exec  -- nslookup kubernetes.io: exit status 1 (94.080256ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- exec  -- nslookup kubernetes.default: exit status 1 (93.566735ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (93.328991ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.575929ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:17.534627   25798 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (106.69s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-774000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (94.672238ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-774000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (108.109331ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:17.793101   25807 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.26s)

                                                
                                    
TestMultiNode/serial/AddNode (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-774000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-774000 -v 3 --alsologtostderr: exit status 80 (199.981062ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:37:17.848970   25811 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:37:17.850268   25811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:17.850274   25811 out.go:309] Setting ErrFile to fd 2...
	I1216 14:37:17.850278   25811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:17.850455   25811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:37:17.850824   25811 mustload.go:65] Loading cluster: multinode-774000
	I1216 14:37:17.851120   25811 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:37:17.851542   25811 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:17.901985   25811 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:17.925453   25811 out.go:177] 
	W1216 14:37:17.947000   25811 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:37:17.947028   25811 out.go:239] * 
	* 
	W1216 14:37:17.950683   25811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 14:37:17.971918   25811 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-774000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (106.476117ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:18.154597   25817 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.36s)
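Every failure in this group reduces to the same probe: minikube asks Docker for the container's State.Status and the multinode-774000 container no longer exists. A minimal Go sketch of that probe (a hypothetical helper, not minikube's own code; it assumes docker is on PATH):

package main

import (
	"fmt"
	"os/exec"
)

// inspectState mirrors the probe seen in the log: it asks Docker for the
// container's State.Status and surfaces the raw output when the container
// does not exist ("Error response from daemon: No such container: ...").
func inspectState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
	}
	return string(out), nil
}

func main() {
	state, err := inspectState("multinode-774000")
	if err != nil {
		fmt.Println(err) // with the container deleted, this carries the inspect failure
		return
	}
	fmt.Println("state:", state)
}

Run against a host where the container has been removed, the error string carries the same "No such container" daemon response shown in the logs above.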

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-774000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-774000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (35.903546ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-774000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-774000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-774000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.419038ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:18.352882   25824 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)
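The "unexpected end of JSON input" above follows from the kubectl query failing before it printed anything: decoding an empty payload with encoding/json yields exactly that error. A small illustrative sketch (the label slice type is an assumption):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The kubectl jsonpath query failed, so there was no output to parse;
	// decoding an empty payload reproduces the reported error.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}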

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:156: expected profile "multinode-774000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-381000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-774000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-774000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KV
MNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-774000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\
"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"
AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (108.592385ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:18.695706   25836 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.34s)
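The node-count assertion parses the `profile list --output json` payload quoted above. A minimal Go sketch of that shape, modelling only the fields needed to count nodes (an assumption, not the full minikube config schema):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just enough of the payload to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct{ Name string }
		}
	} `json:"valid"`
}

func main() {
	// Trimmed-down version of the payload captured in the log.
	data := []byte(`{"valid":[{"Name":"multinode-774000","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(data, &pl); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test expected 3
	}
}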

                                                
                                    
TestMultiNode/serial/CopyFile (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 status --output json --alsologtostderr: exit status 7 (107.599667ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-774000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:37:18.751971   25840 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:37:18.752197   25840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:18.752202   25840 out.go:309] Setting ErrFile to fd 2...
	I1216 14:37:18.752206   25840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:18.752393   25840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:37:18.752583   25840 out.go:303] Setting JSON to true
	I1216 14:37:18.752606   25840 mustload.go:65] Loading cluster: multinode-774000
	I1216 14:37:18.752635   25840 notify.go:220] Checking for updates...
	I1216 14:37:18.752885   25840 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:37:18.752897   25840 status.go:255] checking status of multinode-774000 ...
	I1216 14:37:18.753293   25840 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:18.803413   25840 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:18.803471   25840 status.go:330] multinode-774000 host status = "" (err=state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	)
	I1216 14:37:18.803493   25840 status.go:257] multinode-774000 status: &{Name:multinode-774000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1216 14:37:18.803509   25840 status.go:260] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	E1216 14:37:18.803521   25840 status.go:263] The "multinode-774000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-774000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.730621ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:18.966115   25846 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.27s)
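The "cannot unmarshal object into Go value of type []cmd.Status" error is standard encoding/json behaviour: with only one node left, `status --output json` printed a single JSON object, and decoding an object into a slice fails. A self-contained reproduction (Status here is a stand-in for minikube's cmd.Status, so the type name in the error differs):

package main

import (
	"encoding/json"
	"fmt"
)

// Status stands in for minikube's cmd.Status; only the fields visible in the
// log output are modelled here.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// The status command printed one object for the single remaining node;
	// decoding an object into a slice fails the same way the test reports.
	single := []byte(`{"Name":"multinode-774000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)
	var many []Status
	fmt.Println(json.Unmarshal(single, &many))
}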

                                                
                                    
TestMultiNode/serial/StopNode (0.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 node stop m03: exit status 85 (148.872643ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-774000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 status: exit status 7 (108.980633ms)

                                                
                                                
-- stdout --
	multinode-774000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:19.224829   25852 status.go:260] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	E1216 14:37:19.224842   25852 status.go:263] The "multinode-774000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr: exit status 7 (107.683927ms)

                                                
                                                
-- stdout --
	multinode-774000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:37:19.280271   25856 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:37:19.280513   25856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:19.280519   25856 out.go:309] Setting ErrFile to fd 2...
	I1216 14:37:19.280523   25856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:19.280719   25856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:37:19.280919   25856 out.go:303] Setting JSON to false
	I1216 14:37:19.280942   25856 mustload.go:65] Loading cluster: multinode-774000
	I1216 14:37:19.280975   25856 notify.go:220] Checking for updates...
	I1216 14:37:19.281252   25856 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:37:19.281263   25856 status.go:255] checking status of multinode-774000 ...
	I1216 14:37:19.281659   25856 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:19.332517   25856 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:19.332572   25856 status.go:330] multinode-774000 host status = "" (err=state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	)
	I1216 14:37:19.332592   25856 status.go:257] multinode-774000 status: &{Name:multinode-774000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1216 14:37:19.332611   25856 status.go:260] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	E1216 14:37:19.332618   25856 status.go:263] The "multinode-774000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr": multinode-774000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:261: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr": multinode-774000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:265: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr": multinode-774000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (108.195403ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:19.495454   25862 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.53s)
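The three "incorrect number of ..." assertions above inspect the plain-text status output, and every field reads Nonexistent, so neither Running nor Stopped entries can be counted. A rough sketch of that kind of count over the status text captured above (the counting style is only an illustration, not the test's actual implementation):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text as captured above for the missing host.
	status := `multinode-774000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
`
	fmt.Println("running kubelets:", strings.Count(status, "kubelet: Running")) // 0
	fmt.Println("stopped hosts:", strings.Count(status, "host: Stopped"))       // 0
}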

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 node start m03 --alsologtostderr: exit status 85 (147.269706ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:37:19.608434   25868 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:37:19.609429   25868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:19.609435   25868 out.go:309] Setting ErrFile to fd 2...
	I1216 14:37:19.609439   25868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:19.609628   25868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:37:19.609966   25868 mustload.go:65] Loading cluster: multinode-774000
	I1216 14:37:19.610241   25868 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:37:19.630892   25868 out.go:177] 
	W1216 14:37:19.651997   25868 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1216 14:37:19.652023   25868 out.go:239] * 
	* 
	W1216 14:37:19.656875   25868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 14:37:19.678028   25868 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1216 14:37:19.608434   25868 out.go:296] Setting OutFile to fd 1 ...
I1216 14:37:19.609429   25868 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 14:37:19.609435   25868 out.go:309] Setting ErrFile to fd 2...
I1216 14:37:19.609439   25868 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 14:37:19.609628   25868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
I1216 14:37:19.609966   25868 mustload.go:65] Loading cluster: multinode-774000
I1216 14:37:19.610241   25868 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 14:37:19.630892   25868 out.go:177] 
W1216 14:37:19.651997   25868 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1216 14:37:19.652023   25868 out.go:239] * 
* 
W1216 14:37:19.656875   25868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 14:37:19.678028   25868 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-774000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 status: exit status 7 (108.228364ms)

                                                
                                                
-- stdout --
	multinode-774000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:19.808228   25870 status.go:260] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	E1216 14:37:19.808239   25870 status.go:263] The "multinode-774000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-774000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "e94d859371c42a8cb8325416dd64ddaf72ebc801aeb4921ab10b38251293ef97",
	        "Created": "2023-12-16T22:29:25.071819452Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.663904ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:37:19.971427   25876 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.48s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (786.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-774000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-774000
E1216 14:37:27.598124   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-774000: exit status 82 (14.045185583s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-774000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-774000" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-774000 --wait=true -v=8 --alsologtostderr
E1216 14:41:39.863816   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:41:56.813020   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:42:27.651509   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:46:56.818828   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:47:10.713514   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:47:27.656403   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-774000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m51.816932512s)

                                                
                                                
-- stdout --
	* [multinode-774000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-774000 in cluster multinode-774000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* docker "multinode-774000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-774000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:37:34.129960   25901 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:37:34.130187   25901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:34.130193   25901 out.go:309] Setting ErrFile to fd 2...
	I1216 14:37:34.130197   25901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:37:34.130383   25901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:37:34.131814   25901 out.go:303] Setting JSON to false
	I1216 14:37:34.154488   25901 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9423,"bootTime":1702756831,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 14:37:34.154602   25901 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 14:37:34.176515   25901 out.go:177] * [multinode-774000] minikube v1.32.0 on Darwin 14.2
	I1216 14:37:34.219022   25901 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 14:37:34.219199   25901 notify.go:220] Checking for updates...
	I1216 14:37:34.263088   25901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 14:37:34.285008   25901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 14:37:34.307122   25901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 14:37:34.329226   25901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 14:37:34.350939   25901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 14:37:34.372736   25901 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:37:34.372855   25901 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 14:37:34.429610   25901 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 14:37:34.429775   25901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 14:37:34.530840   25901 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-12-16 22:37:34.519970935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 14:37:34.574349   25901 out.go:177] * Using the docker driver based on existing profile
	I1216 14:37:34.595860   25901 start.go:298] selected driver: docker
	I1216 14:37:34.595885   25901 start.go:902] validating driver "docker" against &{Name:multinode-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-774000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 14:37:34.595996   25901 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 14:37:34.596224   25901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 14:37:34.696697   25901 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-12-16 22:37:34.686464357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 14:37:34.699894   25901 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 14:37:34.699969   25901 cni.go:84] Creating CNI manager for ""
	I1216 14:37:34.699980   25901 cni.go:136] 1 nodes found, recommending kindnet
	I1216 14:37:34.699989   25901 start_flags.go:323] config:
	{Name:multinode-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-774000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 14:37:34.743372   25901 out.go:177] * Starting control plane node multinode-774000 in cluster multinode-774000
	I1216 14:37:34.764587   25901 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 14:37:34.813089   25901 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 14:37:34.834258   25901 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:37:34.834362   25901 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 14:37:34.834365   25901 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 14:37:34.834381   25901 cache.go:56] Caching tarball of preloaded images
	I1216 14:37:34.834575   25901 preload.go:174] Found /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 14:37:34.834595   25901 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 14:37:34.834752   25901 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/multinode-774000/config.json ...
	I1216 14:37:34.885958   25901 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 14:37:34.885982   25901 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 14:37:34.886016   25901 cache.go:194] Successfully downloaded all kic artifacts
	I1216 14:37:34.886058   25901 start.go:365] acquiring machines lock for multinode-774000: {Name:mkbfbdd77472705ce76cfd99f9e1c31146413090 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 14:37:34.886147   25901 start.go:369] acquired machines lock for "multinode-774000" in 68.081µs
	I1216 14:37:34.886175   25901 start.go:96] Skipping create...Using existing machine configuration
	I1216 14:37:34.886183   25901 fix.go:54] fixHost starting: 
	I1216 14:37:34.886400   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:34.936301   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:34.936346   25901 fix.go:102] recreateIfNeeded on multinode-774000: state= err=unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:34.936370   25901 fix.go:107] machineExists: false. err=machine does not exist
	I1216 14:37:34.958153   25901 out.go:177] * docker "multinode-774000" container is missing, will recreate.
	I1216 14:37:35.000637   25901 delete.go:124] DEMOLISHING multinode-774000 ...
	I1216 14:37:35.000820   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:35.053625   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:37:35.053692   25901 stop.go:75] unable to get state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:35.053710   25901 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:35.054084   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:35.105016   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:35.105064   25901 delete.go:82] Unable to get host status for multinode-774000, assuming it has already been deleted: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:35.105141   25901 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:37:35.155803   25901 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:37:35.155833   25901 kic.go:371] could not find the container multinode-774000 to remove it. will try anyways
	I1216 14:37:35.155914   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:35.206517   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:37:35.206563   25901 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:35.206656   25901 cli_runner.go:164] Run: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0"
	W1216 14:37:35.256801   25901 cli_runner.go:211] docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 14:37:35.256832   25901 oci.go:650] error shutdown multinode-774000: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:36.258488   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:36.312236   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:36.312278   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:36.312288   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:36.312326   25901 retry.go:31] will retry after 536.743518ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:36.849837   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:36.904899   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:36.904941   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:36.904957   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:36.904980   25901 retry.go:31] will retry after 718.202798ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:37.625260   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:37.678623   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:37.678666   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:37.678675   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:37.678699   25901 retry.go:31] will retry after 1.027721301s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:38.707194   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:38.759605   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:38.759651   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:38.759661   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:38.759687   25901 retry.go:31] will retry after 1.828786518s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:40.589133   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:40.642112   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:40.642154   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:40.642163   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:40.642188   25901 retry.go:31] will retry after 1.750018373s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:42.393824   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:42.448141   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:42.448185   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:42.448194   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:42.448218   25901 retry.go:31] will retry after 5.666990343s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:48.115949   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:48.170561   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:48.170605   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:48.170619   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:48.170643   25901 retry.go:31] will retry after 3.65294099s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:51.825562   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:37:51.879974   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:37:51.880018   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:37:51.880026   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:37:51.880058   25901 oci.go:88] couldn't shut down multinode-774000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	 
	I1216 14:37:51.880142   25901 cli_runner.go:164] Run: docker rm -f -v multinode-774000
	I1216 14:37:51.931324   25901 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:37:51.981256   25901 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:37:51.981372   25901 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:37:52.031902   25901 cli_runner.go:164] Run: docker network rm multinode-774000
	I1216 14:37:52.125216   25901 fix.go:114] Sleeping 1 second for extra luck!
	I1216 14:37:53.127413   25901 start.go:125] createHost starting for "" (driver="docker")
	I1216 14:37:53.149654   25901 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1216 14:37:53.149826   25901 start.go:159] libmachine.API.Create for "multinode-774000" (driver="docker")
	I1216 14:37:53.149906   25901 client.go:168] LocalClient.Create starting
	I1216 14:37:53.150098   25901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 14:37:53.150186   25901 main.go:141] libmachine: Decoding PEM data...
	I1216 14:37:53.150224   25901 main.go:141] libmachine: Parsing certificate...
	I1216 14:37:53.150338   25901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 14:37:53.150414   25901 main.go:141] libmachine: Decoding PEM data...
	I1216 14:37:53.150430   25901 main.go:141] libmachine: Parsing certificate...
	I1216 14:37:53.171715   25901 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 14:37:53.224124   25901 cli_runner.go:211] docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 14:37:53.224239   25901 network_create.go:281] running [docker network inspect multinode-774000] to gather additional debugging logs...
	I1216 14:37:53.224253   25901 cli_runner.go:164] Run: docker network inspect multinode-774000
	W1216 14:37:53.274745   25901 cli_runner.go:211] docker network inspect multinode-774000 returned with exit code 1
	I1216 14:37:53.274778   25901 network_create.go:284] error running [docker network inspect multinode-774000]: docker network inspect multinode-774000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-774000 not found
	I1216 14:37:53.274796   25901 network_create.go:286] output of [docker network inspect multinode-774000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-774000 not found
	
	** /stderr **
	I1216 14:37:53.274930   25901 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:37:53.326980   25901 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:37:53.327376   25901 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00244cba0}
	I1216 14:37:53.327392   25901 network_create.go:124] attempt to create docker network multinode-774000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1216 14:37:53.327459   25901 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000
	I1216 14:37:53.413076   25901 network_create.go:108] docker network multinode-774000 192.168.58.0/24 created
	I1216 14:37:53.413114   25901 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-774000" container
	I1216 14:37:53.413223   25901 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 14:37:53.464156   25901 cli_runner.go:164] Run: docker volume create multinode-774000 --label name.minikube.sigs.k8s.io=multinode-774000 --label created_by.minikube.sigs.k8s.io=true
	I1216 14:37:53.514311   25901 oci.go:103] Successfully created a docker volume multinode-774000
	I1216 14:37:53.514424   25901 cli_runner.go:164] Run: docker run --rm --name multinode-774000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-774000 --entrypoint /usr/bin/test -v multinode-774000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 14:37:53.819567   25901 oci.go:107] Successfully prepared a docker volume multinode-774000
	I1216 14:37:53.819612   25901 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:37:53.819628   25901 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 14:37:53.819724   25901 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-774000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 14:43:53.205937   25901 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 14:43:53.206071   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:53.260708   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:53.260869   25901 retry.go:31] will retry after 193.897884ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:53.455143   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:53.509052   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:53.509157   25901 retry.go:31] will retry after 217.290163ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:53.727119   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:53.780231   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:53.780353   25901 retry.go:31] will retry after 537.591747ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:54.320295   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:54.373029   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:43:54.373141   25901 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:43:54.373164   25901 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:54.373222   25901 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 14:43:54.373279   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:54.423490   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:54.423586   25901 retry.go:31] will retry after 185.401066ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:54.609554   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:54.663885   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:54.663978   25901 retry.go:31] will retry after 283.133051ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:54.947648   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:55.000058   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:55.000160   25901 retry.go:31] will retry after 677.684256ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:55.679462   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:55.733298   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:43:55.733402   25901 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:43:55.733420   25901 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:55.733443   25901 start.go:128] duration metric: createHost completed in 6m2.55123345s
	I1216 14:43:55.733509   25901 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 14:43:55.733572   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:55.784460   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:55.784560   25901 retry.go:31] will retry after 166.446646ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:55.952900   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:56.005977   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:56.006076   25901 retry.go:31] will retry after 430.938776ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:56.437581   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:56.490158   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:56.490243   25901 retry.go:31] will retry after 341.77128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:56.833488   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:56.885388   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:43:56.885486   25901 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:43:56.885504   25901 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:56.885586   25901 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 14:43:56.885665   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:56.935499   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:56.935602   25901 retry.go:31] will retry after 307.233964ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:57.245230   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:57.300896   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:57.300989   25901 retry.go:31] will retry after 281.469149ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:57.583236   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:57.638320   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:43:57.638432   25901 retry.go:31] will retry after 683.524423ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:58.322455   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:43:58.376167   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:43:58.376261   25901 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:43:58.376277   25901 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:43:58.376295   25901 fix.go:56] fixHost completed within 6m23.435050774s
	I1216 14:43:58.376301   25901 start.go:83] releasing machines lock for "multinode-774000", held for 6m23.435084269s
	W1216 14:43:58.376315   25901 start.go:694] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W1216 14:43:58.376381   25901 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I1216 14:43:58.376388   25901 start.go:709] Will try again in 5 seconds ...
	I1216 14:44:03.377566   25901 start.go:365] acquiring machines lock for multinode-774000: {Name:mkbfbdd77472705ce76cfd99f9e1c31146413090 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 14:44:03.377764   25901 start.go:369] acquired machines lock for "multinode-774000" in 154.023µs
	I1216 14:44:03.377800   25901 start.go:96] Skipping create...Using existing machine configuration
	I1216 14:44:03.377807   25901 fix.go:54] fixHost starting: 
	I1216 14:44:03.378175   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:03.432279   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:03.432322   25901 fix.go:102] recreateIfNeeded on multinode-774000: state= err=unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:03.432343   25901 fix.go:107] machineExists: false. err=machine does not exist
	I1216 14:44:03.454072   25901 out.go:177] * docker "multinode-774000" container is missing, will recreate.
	I1216 14:44:03.475590   25901 delete.go:124] DEMOLISHING multinode-774000 ...
	I1216 14:44:03.475732   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:03.526171   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:44:03.526216   25901 stop.go:75] unable to get state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:03.526234   25901 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:03.526606   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:03.576144   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:03.576193   25901 delete.go:82] Unable to get host status for multinode-774000, assuming it has already been deleted: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:03.576280   25901 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:44:03.626446   25901 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:44:03.626476   25901 kic.go:371] could not find the container multinode-774000 to remove it. will try anyways
	I1216 14:44:03.626554   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:03.676831   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:44:03.676875   25901 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:03.676952   25901 cli_runner.go:164] Run: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0"
	W1216 14:44:03.726498   25901 cli_runner.go:211] docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 14:44:03.726532   25901 oci.go:650] error shutdown multinode-774000: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:04.727457   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:04.779674   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:04.779726   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:04.779747   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:04.779769   25901 retry.go:31] will retry after 574.669391ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:05.356440   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:05.410832   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:05.410874   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:05.410884   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:05.410910   25901 retry.go:31] will retry after 609.641905ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:06.022948   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:06.073728   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:06.073774   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:06.073785   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:06.073812   25901 retry.go:31] will retry after 1.193954814s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:07.268204   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:07.321101   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:07.321153   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:07.321163   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:07.321186   25901 retry.go:31] will retry after 956.222936ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:08.278229   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:08.333739   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:08.333788   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:08.333798   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:08.333819   25901 retry.go:31] will retry after 3.078039504s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:11.413936   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:11.466522   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:11.466573   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:11.466583   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:11.466604   25901 retry.go:31] will retry after 3.312914213s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:14.781913   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:14.836563   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:14.836607   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:14.836623   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:14.836647   25901 retry.go:31] will retry after 4.185147498s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:19.022458   25901 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:44:19.075676   25901 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:44:19.075719   25901 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:44:19.075728   25901 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:44:19.075756   25901 oci.go:88] couldn't shut down multinode-774000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	 
	I1216 14:44:19.075836   25901 cli_runner.go:164] Run: docker rm -f -v multinode-774000
	I1216 14:44:19.126768   25901 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:44:19.177213   25901 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:44:19.177339   25901 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:44:19.227980   25901 cli_runner.go:164] Run: docker network rm multinode-774000
	I1216 14:44:19.336678   25901 fix.go:114] Sleeping 1 second for extra luck!
	I1216 14:44:20.338047   25901 start.go:125] createHost starting for "" (driver="docker")
	I1216 14:44:20.360329   25901 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1216 14:44:20.360553   25901 start.go:159] libmachine.API.Create for "multinode-774000" (driver="docker")
	I1216 14:44:20.360590   25901 client.go:168] LocalClient.Create starting
	I1216 14:44:20.360849   25901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 14:44:20.360945   25901 main.go:141] libmachine: Decoding PEM data...
	I1216 14:44:20.360970   25901 main.go:141] libmachine: Parsing certificate...
	I1216 14:44:20.361069   25901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 14:44:20.361145   25901 main.go:141] libmachine: Decoding PEM data...
	I1216 14:44:20.361163   25901 main.go:141] libmachine: Parsing certificate...
	I1216 14:44:20.404029   25901 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 14:44:20.455832   25901 cli_runner.go:211] docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 14:44:20.455936   25901 network_create.go:281] running [docker network inspect multinode-774000] to gather additional debugging logs...
	I1216 14:44:20.455957   25901 cli_runner.go:164] Run: docker network inspect multinode-774000
	W1216 14:44:20.506632   25901 cli_runner.go:211] docker network inspect multinode-774000 returned with exit code 1
	I1216 14:44:20.506672   25901 network_create.go:284] error running [docker network inspect multinode-774000]: docker network inspect multinode-774000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-774000 not found
	I1216 14:44:20.506682   25901 network_create.go:286] output of [docker network inspect multinode-774000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-774000 not found
	
	** /stderr **
	I1216 14:44:20.506839   25901 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:44:20.559508   25901 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:44:20.560899   25901 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:44:20.561251   25901 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00049da50}
	I1216 14:44:20.561266   25901 network_create.go:124] attempt to create docker network multinode-774000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1216 14:44:20.561343   25901 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000
	W1216 14:44:20.611984   25901 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000 returned with exit code 1
	W1216 14:44:20.612018   25901 network_create.go:149] failed to create docker network multinode-774000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1216 14:44:20.612042   25901 network_create.go:116] failed to create docker network multinode-774000 192.168.67.0/24, will retry: subnet is taken
	I1216 14:44:20.613497   25901 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:44:20.613864   25901 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f8e70}
	I1216 14:44:20.613883   25901 network_create.go:124] attempt to create docker network multinode-774000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1216 14:44:20.613958   25901 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000
	I1216 14:44:20.701790   25901 network_create.go:108] docker network multinode-774000 192.168.76.0/24 created
	I1216 14:44:20.701910   25901 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-774000" container
	I1216 14:44:20.702015   25901 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 14:44:20.753663   25901 cli_runner.go:164] Run: docker volume create multinode-774000 --label name.minikube.sigs.k8s.io=multinode-774000 --label created_by.minikube.sigs.k8s.io=true
	I1216 14:44:20.803319   25901 oci.go:103] Successfully created a docker volume multinode-774000
	I1216 14:44:20.803440   25901 cli_runner.go:164] Run: docker run --rm --name multinode-774000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-774000 --entrypoint /usr/bin/test -v multinode-774000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 14:44:21.095301   25901 oci.go:107] Successfully prepared a docker volume multinode-774000
	I1216 14:44:21.095334   25901 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:44:21.095346   25901 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 14:44:21.095460   25901 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-774000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 14:50:20.366641   25901 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 14:50:20.366767   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:20.421583   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:20.421715   25901 retry.go:31] will retry after 288.252349ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:20.710279   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:20.764777   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:20.764892   25901 retry.go:31] will retry after 344.308059ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:21.110550   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:21.162262   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:21.162365   25901 retry.go:31] will retry after 313.391206ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:21.478117   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:21.532028   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:50:21.532130   25901 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:50:21.532149   25901 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:21.532211   25901 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 14:50:21.532264   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:21.583051   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:21.583151   25901 retry.go:31] will retry after 137.20258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:21.721213   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:21.772734   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:21.772850   25901 retry.go:31] will retry after 432.108878ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:22.205682   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:22.257801   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:22.257899   25901 retry.go:31] will retry after 381.19675ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:22.641470   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:22.695265   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:50:22.695365   25901 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:50:22.695385   25901 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:22.695395   25901 start.go:128] duration metric: createHost completed in 6m2.351677273s
	I1216 14:50:22.695463   25901 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 14:50:22.695513   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:22.745871   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:22.745965   25901 retry.go:31] will retry after 222.583491ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:22.970915   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:23.025300   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:23.025394   25901 retry.go:31] will retry after 436.882389ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:23.462817   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:23.518019   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:23.518113   25901 retry.go:31] will retry after 741.524008ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:24.261315   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:24.316322   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:50:24.316421   25901 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:50:24.316446   25901 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:24.316518   25901 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 14:50:24.316576   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:24.369065   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:24.369168   25901 retry.go:31] will retry after 318.881591ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:24.688515   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:24.742671   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:24.742776   25901 retry.go:31] will retry after 218.534159ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:24.963696   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:25.018180   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	I1216 14:50:25.018279   25901 retry.go:31] will retry after 709.12513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:25.727657   25901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000
	W1216 14:50:25.780467   25901 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000 returned with exit code 1
	W1216 14:50:25.780567   25901 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	W1216 14:50:25.780582   25901 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-774000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:25.780597   25901 fix.go:56] fixHost completed within 6m22.396937347s
	I1216 14:50:25.780605   25901 start.go:83] releasing machines lock for "multinode-774000", held for 6m22.396974695s
	W1216 14:50:25.780682   25901 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-774000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-774000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1216 14:50:25.823839   25901 out.go:177] 
	W1216 14:50:25.846992   25901 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1216 14:50:25.847059   25901 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1216 14:50:25.847103   25901 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1216 14:50:25.890616   25901 out.go:177] 

                                                
                                                
** /stderr **
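
Note on the network retry visible in the stderr above: `docker network create` on 192.168.67.0/24 fails with "Pool overlaps with other one on this address space", so the next candidate /24 (192.168.76.0/24) is tried and succeeds. The following is a hypothetical standalone sketch of that fallback (invented helper names, not minikube source); it simply attempts each candidate subnet in order and skips the ones the daemon rejects as overlapping.

// network_create_retry.go — hypothetical sketch of the subnet fallback seen above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork tries each candidate subnet in order and returns the first one
// the docker daemon accepts, skipping subnets that overlap an existing pool.
func createNetwork(name string, subnets []string) (string, error) {
	for _, subnet := range subnets {
		gateway := strings.TrimSuffix(subnet, ".0/24") + ".1" // e.g. 192.168.67.0/24 -> 192.168.67.1
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name)
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			if strings.Contains(stderr.String(), "Pool overlaps") {
				continue // subnet is taken, try the next candidate (what the log shows)
			}
			return "", fmt.Errorf("docker network create %s: %w: %s", subnet, err, stderr.String())
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free subnet among %v", subnets)
}

func main() {
	// Candidate subnets in the order the log shows them being considered.
	subnet, err := createNetwork("multinode-774000", []string{"192.168.67.0/24", "192.168.76.0/24"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created network on", subnet)
}
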
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-774000" : exit status 52
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-774000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "866d54c4724c953a67980bbd1b6fe645257d162995a3b1c2c7a05c1c8c45c997",
	        "Created": "2023-12-16T22:44:20.662294945Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.73014ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:50:26.185205   26316 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (786.15s)
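
For reference, the check that fails repeatedly in the stderr above is `docker container inspect multinode-774000 --format={{.State.Status}}` exiting with status 1 and "Error response from daemon: No such container: multinode-774000". A minimal, hypothetical Go probe reproducing that same check is sketched below (it is not code from the minikube repo; the helper name is invented). It shells out to the identical docker command and treats the "No such container" stderr as "container is gone" rather than as an inspect failure.

// inspect_state.go — hypothetical sketch of the container-state probe repeated above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the container's State.Status, or ok=false when the
// daemon reports that the container does not exist (the case seen in the log).
func containerState(name string) (state string, ok bool, err error) {
	cmd := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if runErr := cmd.Run(); runErr != nil {
		if strings.Contains(stderr.String(), "No such container") {
			return "", false, nil // matches "Error response from daemon: No such container: ..."
		}
		return "", false, fmt.Errorf("docker inspect %s: %w: %s", name, runErr, stderr.String())
	}
	return strings.TrimSpace(stdout.String()), true, nil
}

func main() {
	state, ok, err := containerState("multinode-774000")
	switch {
	case err != nil:
		fmt.Println("inspect error:", err)
	case !ok:
		fmt.Println("container does not exist") // what every retry in the log above hit
	default:
		fmt.Println("state:", state)
	}
}
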

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 node delete m03: exit status 80 (203.687632ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-774000 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr: exit status 7 (108.689469ms)

                                                
                                                
-- stdout --
	multinode-774000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:50:26.445565   26325 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:50:26.445816   26325 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:50:26.445822   26325 out.go:309] Setting ErrFile to fd 2...
	I1216 14:50:26.445827   26325 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:50:26.446007   26325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:50:26.446194   26325 out.go:303] Setting JSON to false
	I1216 14:50:26.446222   26325 mustload.go:65] Loading cluster: multinode-774000
	I1216 14:50:26.446256   26325 notify.go:220] Checking for updates...
	I1216 14:50:26.446515   26325 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:50:26.446529   26325 status.go:255] checking status of multinode-774000 ...
	I1216 14:50:26.446932   26325 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:26.497971   26325 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:26.498038   26325 status.go:330] multinode-774000 host status = "" (err=state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	)
	I1216 14:50:26.498055   26325 status.go:257] multinode-774000 status: &{Name:multinode-774000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1216 14:50:26.498075   26325 status.go:260] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	E1216 14:50:26.498082   26325 status.go:263] The "multinode-774000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "866d54c4724c953a67980bbd1b6fe645257d162995a3b1c2c7a05c1c8c45c997",
	        "Created": "2023-12-16T22:44:20.662294945Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.980394ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:50:26.660061   26331 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.47s)
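
The post-mortem above relies on `out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000`, which exits with status 7 and prints "Nonexistent" when the container is missing; the helper records this as "may be ok". Below is a hypothetical standalone sketch of that check (file and function names invented, not minikube code): it runs the same status command and reports the host state together with the exit code instead of treating the non-zero exit as fatal.

// host_status.go — hypothetical sketch of the post-mortem host-state check above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(minikubeBin, profile string) (status string, exitCode int, err error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, runErr := cmd.Output()
	status = strings.TrimSpace(string(out))
	if runErr != nil {
		var exitErr *exec.ExitError
		if errors.As(runErr, &exitErr) {
			// A non-zero exit still carries a usable host state; in the report
			// above this is exit status 7 with "Nonexistent" ("may be ok").
			return status, exitErr.ExitCode(), nil
		}
		return "", 0, fmt.Errorf("running %s status: %w", minikubeBin, runErr)
	}
	return status, 0, nil
}

func main() {
	status, code, err := hostStatus("out/minikube-darwin-amd64", "multinode-774000")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("host=%q exit=%d\n", status, code) // e.g. host="Nonexistent" exit=7
}
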

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (15.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 stop: exit status 82 (15.0097708s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	* Stopping node "multinode-774000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-774000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-774000 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 status: exit status 7 (108.13165ms)

                                                
                                                
-- stdout --
	multinode-774000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:50:41.778493   26360 status.go:260] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	E1216 14:50:41.778505   26360 status.go:263] The "multinode-774000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr: exit status 7 (108.142826ms)

                                                
                                                
-- stdout --
	multinode-774000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:50:41.834066   26364 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:50:41.834307   26364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:50:41.834314   26364 out.go:309] Setting ErrFile to fd 2...
	I1216 14:50:41.834318   26364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:50:41.834515   26364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:50:41.834704   26364 out.go:303] Setting JSON to false
	I1216 14:50:41.834728   26364 mustload.go:65] Loading cluster: multinode-774000
	I1216 14:50:41.834757   26364 notify.go:220] Checking for updates...
	I1216 14:50:41.835016   26364 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:50:41.835028   26364 status.go:255] checking status of multinode-774000 ...
	I1216 14:50:41.835431   26364 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:41.886623   26364 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:41.886691   26364 status.go:330] multinode-774000 host status = "" (err=state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	)
	I1216 14:50:41.886708   26364 status.go:257] multinode-774000 status: &{Name:multinode-774000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1216 14:50:41.886727   26364 status.go:260] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	E1216 14:50:41.886734   26364 status.go:263] The "multinode-774000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr": multinode-774000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-774000 status --alsologtostderr": multinode-774000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "866d54c4724c953a67980bbd1b6fe645257d162995a3b1c2c7a05c1c8c45c997",
	        "Created": "2023-12-16T22:44:20.662294945Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.35621ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:50:42.048692   26370 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.39s)
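
The assertions at multinode_test.go:361 and :365 check how many nodes report their host and kubelet as Stopped after `minikube stop`; with the container gone, the status output above shows "Nonexistent" instead, so the counts do not match. The sketch below is an approximate, hypothetical reconstruction of that count (it is not the actual multinode_test.go code) using the status text recorded in the report.

// stopped_count.go — approximate reconstruction of the "stopped hosts/kubelets" check.
package main

import (
	"fmt"
	"strings"
)

// countStopped counts how many nodes in a `minikube status` dump report the
// given field (e.g. "host", "kubelet") as Stopped.
func countStopped(statusOutput, field string) int {
	count := 0
	for _, line := range strings.Split(statusOutput, "\n") {
		if strings.TrimSpace(line) == field+": Stopped" {
			count++
		}
	}
	return count
}

func main() {
	// Status output copied from the report above.
	status := `multinode-774000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent`

	wantNodes := 1 // the only node the profile still knows about at this point
	if got := countStopped(status, "host"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := countStopped(status, "kubelet"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}
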

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (134.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-774000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1216 14:51:56.822235   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:52:27.660892   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-774000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m14.482047872s)

                                                
                                                
-- stdout --
	* [multinode-774000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-774000 in cluster multinode-774000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* docker "multinode-774000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 14:50:42.159855   26376 out.go:296] Setting OutFile to fd 1 ...
	I1216 14:50:42.160061   26376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:50:42.160068   26376 out.go:309] Setting ErrFile to fd 2...
	I1216 14:50:42.160072   26376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 14:50:42.160266   26376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 14:50:42.161647   26376 out.go:303] Setting JSON to false
	I1216 14:50:42.185947   26376 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10211,"bootTime":1702756831,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 14:50:42.186052   26376 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 14:50:42.208009   26376 out.go:177] * [multinode-774000] minikube v1.32.0 on Darwin 14.2
	I1216 14:50:42.249958   26376 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 14:50:42.250031   26376 notify.go:220] Checking for updates...
	I1216 14:50:42.292663   26376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 14:50:42.313880   26376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 14:50:42.335836   26376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 14:50:42.356524   26376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 14:50:42.398755   26376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 14:50:42.420236   26376 config.go:182] Loaded profile config "multinode-774000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 14:50:42.420704   26376 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 14:50:42.477041   26376 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 14:50:42.477212   26376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 14:50:42.581060   26376 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-12-16 22:50:42.570189302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 14:50:42.602374   26376 out.go:177] * Using the docker driver based on existing profile
	I1216 14:50:42.623405   26376 start.go:298] selected driver: docker
	I1216 14:50:42.623430   26376 start.go:902] validating driver "docker" against &{Name:multinode-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-774000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 14:50:42.623540   26376 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 14:50:42.623793   26376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 14:50:42.729048   26376 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-12-16 22:50:42.718588131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 14:50:42.732434   26376 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 14:50:42.732503   26376 cni.go:84] Creating CNI manager for ""
	I1216 14:50:42.732513   26376 cni.go:136] 1 nodes found, recommending kindnet
	I1216 14:50:42.732524   26376 start_flags.go:323] config:
	{Name:multinode-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-774000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 14:50:42.775739   26376 out.go:177] * Starting control plane node multinode-774000 in cluster multinode-774000
	I1216 14:50:42.796486   26376 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 14:50:42.838633   26376 out.go:177] * Pulling base image v0.0.42-1702660877-17806 ...
	I1216 14:50:42.859567   26376 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:50:42.859657   26376 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 14:50:42.859663   26376 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 14:50:42.859675   26376 cache.go:56] Caching tarball of preloaded images
	I1216 14:50:42.859898   26376 preload.go:174] Found /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 14:50:42.859925   26376 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 14:50:42.860139   26376 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/multinode-774000/config.json ...
	I1216 14:50:42.911785   26376 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon, skipping pull
	I1216 14:50:42.911809   26376 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in daemon, skipping load
	I1216 14:50:42.911840   26376 cache.go:194] Successfully downloaded all kic artifacts
	I1216 14:50:42.911886   26376 start.go:365] acquiring machines lock for multinode-774000: {Name:mkbfbdd77472705ce76cfd99f9e1c31146413090 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 14:50:42.911980   26376 start.go:369] acquired machines lock for "multinode-774000" in 70.574µs
	I1216 14:50:42.912000   26376 start.go:96] Skipping create...Using existing machine configuration
	I1216 14:50:42.912009   26376 fix.go:54] fixHost starting: 
	I1216 14:50:42.912243   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:42.962332   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:42.962403   26376 fix.go:102] recreateIfNeeded on multinode-774000: state= err=unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:42.962424   26376 fix.go:107] machineExists: false. err=machine does not exist
	I1216 14:50:42.984278   26376 out.go:177] * docker "multinode-774000" container is missing, will recreate.
	I1216 14:50:43.027822   26376 delete.go:124] DEMOLISHING multinode-774000 ...
	I1216 14:50:43.027939   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:43.079432   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:50:43.079485   26376 stop.go:75] unable to get state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:43.079502   26376 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:43.079851   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:43.129722   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:43.129783   26376 delete.go:82] Unable to get host status for multinode-774000, assuming it has already been deleted: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:43.129861   26376 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:50:43.180086   26376 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:50:43.180116   26376 kic.go:371] could not find the container multinode-774000 to remove it. will try anyways
	I1216 14:50:43.180183   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:43.230614   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	W1216 14:50:43.230658   26376 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:43.230748   26376 cli_runner.go:164] Run: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0"
	W1216 14:50:43.281235   26376 cli_runner.go:211] docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1216 14:50:43.281263   26376 oci.go:650] error shutdown multinode-774000: docker exec --privileged -t multinode-774000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:44.282567   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:44.334411   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:44.334459   26376 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:44.334470   26376 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:50:44.334512   26376 retry.go:31] will retry after 369.316419ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:44.706162   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:44.759719   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:44.759761   26376 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:44.759770   26376 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:50:44.759797   26376 retry.go:31] will retry after 560.059109ms: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:45.321255   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:45.374203   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:45.374250   26376 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:45.374259   26376 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:50:45.374284   26376 retry.go:31] will retry after 1.282808218s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:46.658726   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:46.713611   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:46.713652   26376 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:46.713658   26376 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:50:46.713684   26376 retry.go:31] will retry after 2.254844028s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:48.968823   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:49.022670   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:49.022718   26376 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:49.022732   26376 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:50:49.022754   26376 retry.go:31] will retry after 3.175257813s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:52.200441   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:52.255124   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:52.255168   26376 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:52.255175   26376 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:50:52.255212   26376 retry.go:31] will retry after 5.433668247s: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:57.691118   26376 cli_runner.go:164] Run: docker container inspect multinode-774000 --format={{.State.Status}}
	W1216 14:50:57.744616   26376 cli_runner.go:211] docker container inspect multinode-774000 --format={{.State.Status}} returned with exit code 1
	I1216 14:50:57.744660   26376 oci.go:662] temporary error verifying shutdown: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	I1216 14:50:57.744672   26376 oci.go:664] temporary error: container multinode-774000 status is  but expect it to be exited
	I1216 14:50:57.744701   26376 oci.go:88] couldn't shut down multinode-774000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000
	 
	I1216 14:50:57.744784   26376 cli_runner.go:164] Run: docker rm -f -v multinode-774000
	I1216 14:50:57.795464   26376 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-774000
	W1216 14:50:57.845994   26376 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-774000 returned with exit code 1
	I1216 14:50:57.846109   26376 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:50:57.896493   26376 cli_runner.go:164] Run: docker network rm multinode-774000
	I1216 14:50:57.996118   26376 fix.go:114] Sleeping 1 second for extra luck!
	I1216 14:50:58.996552   26376 start.go:125] createHost starting for "" (driver="docker")
	I1216 14:50:59.019053   26376 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1216 14:50:59.019233   26376 start.go:159] libmachine.API.Create for "multinode-774000" (driver="docker")
	I1216 14:50:59.019290   26376 client.go:168] LocalClient.Create starting
	I1216 14:50:59.019489   26376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/ca.pem
	I1216 14:50:59.019579   26376 main.go:141] libmachine: Decoding PEM data...
	I1216 14:50:59.019621   26376 main.go:141] libmachine: Parsing certificate...
	I1216 14:50:59.019763   26376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17806-19996/.minikube/certs/cert.pem
	I1216 14:50:59.019848   26376 main.go:141] libmachine: Decoding PEM data...
	I1216 14:50:59.019866   26376 main.go:141] libmachine: Parsing certificate...
	I1216 14:50:59.020802   26376 cli_runner.go:164] Run: docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 14:50:59.074296   26376 cli_runner.go:211] docker network inspect multinode-774000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 14:50:59.074385   26376 network_create.go:281] running [docker network inspect multinode-774000] to gather additional debugging logs...
	I1216 14:50:59.074404   26376 cli_runner.go:164] Run: docker network inspect multinode-774000
	W1216 14:50:59.124876   26376 cli_runner.go:211] docker network inspect multinode-774000 returned with exit code 1
	I1216 14:50:59.124916   26376 network_create.go:284] error running [docker network inspect multinode-774000]: docker network inspect multinode-774000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-774000 not found
	I1216 14:50:59.124926   26376 network_create.go:286] output of [docker network inspect multinode-774000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-774000 not found
	
	** /stderr **
	I1216 14:50:59.125053   26376 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 14:50:59.177803   26376 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 14:50:59.178209   26376 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002598a60}
	I1216 14:50:59.178228   26376 network_create.go:124] attempt to create docker network multinode-774000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1216 14:50:59.178310   26376 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-774000 multinode-774000
	I1216 14:50:59.264483   26376 network_create.go:108] docker network multinode-774000 192.168.58.0/24 created
	I1216 14:50:59.264518   26376 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-774000" container
	I1216 14:50:59.264625   26376 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 14:50:59.316431   26376 cli_runner.go:164] Run: docker volume create multinode-774000 --label name.minikube.sigs.k8s.io=multinode-774000 --label created_by.minikube.sigs.k8s.io=true
	I1216 14:50:59.367401   26376 oci.go:103] Successfully created a docker volume multinode-774000
	I1216 14:50:59.367515   26376 cli_runner.go:164] Run: docker run --rm --name multinode-774000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-774000 --entrypoint /usr/bin/test -v multinode-774000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -d /var/lib
	I1216 14:50:59.661101   26376 oci.go:107] Successfully prepared a docker volume multinode-774000
	I1216 14:50:59.661135   26376 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 14:50:59.661150   26376 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 14:50:59.661283   26376 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-774000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-774000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-774000
helpers_test.go:235: (dbg) docker inspect multinode-774000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-774000",
	        "Id": "9627b2f6f69b2b7cb239733b6e5c9fa1035fbb78422341ee895a8ea21240ab8c",
	        "Created": "2023-12-16T22:50:59.224987833Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-774000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-774000 -n multinode-774000: exit status 7 (107.255698ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 14:52:56.753526   26524 status.go:249] status error: host: state: unknown state "multinode-774000": docker container inspect multinode-774000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-774000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-774000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (134.70s)

                                                
                                    
TestScheduledStopUnix (300.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-128000 --memory=2048 --driver=docker 
E1216 14:56:56.827925   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 14:57:27.667149   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 14:58:19.880947   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-128000 --memory=2048 --driver=docker : signal: killed (5m0.003489615s)

                                                
                                                
-- stdout --
	* [scheduled-stop-128000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-128000 in cluster scheduled-stop-128000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-128000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-128000 in cluster scheduled-stop-128000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-12-16 15:00:38.971382 -0800 PST m=+4777.714807189
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-128000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-128000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-128000",
	        "Id": "d4232ff7866c21793f1d8aea28329fd45c4df9ec1b4d5ac178b38e0495a88283",
	        "Created": "2023-12-16T22:55:40.006594709Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-128000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-128000 -n scheduled-stop-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-128000 -n scheduled-stop-128000: exit status 7 (107.156107ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 15:00:39.134927   27177 status.go:249] status error: host: state: unknown state "scheduled-stop-128000": docker container inspect scheduled-stop-128000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-128000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-128000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-128000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-128000
--- FAIL: TestScheduledStopUnix (300.89s)

                                                
                                    
TestSkaffold (300.9s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2559894838 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-159000 --memory=2600 --driver=docker 
E1216 15:01:56.831297   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:02:27.670338   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
E1216 15:03:50.729306   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-159000 --memory=2600 --driver=docker : signal: killed (4m57.964833647s)

                                                
                                                
-- stdout --
	* [skaffold-159000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-159000 in cluster skaffold-159000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-159000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-159000 in cluster skaffold-159000
	* Pulling base image v0.0.42-1702660877-17806 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2023-12-16 15:05:39.871289 -0800 PST m=+5078.610107268
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-159000
helpers_test.go:235: (dbg) docker inspect skaffold-159000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-159000",
	        "Id": "7c8de2ec122583bcf13f7bf4604c93f7914694d0b25c778e21ec408a93ea48e3",
	        "Created": "2023-12-16T23:00:42.96966861Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-159000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-159000 -n skaffold-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-159000 -n skaffold-159000: exit status 7 (108.122007ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 15:05:40.031940   27427 status.go:249] status error: host: state: unknown state "skaffold-159000": docker container inspect skaffold-159000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-159000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-159000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-159000
--- FAIL: TestSkaffold (300.90s)

                                                
                                    
TestInsufficientStorage (300.74s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-419000 --memory=2048 --output=json --wait=true --driver=docker 
E1216 15:06:56.836246   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 15:07:27.675057   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/functional-927000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-419000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003207142s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"54161f03-961d-483d-8f47-7a8aabf718a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-419000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c497b4cd-69c5-4852-be99-b8a69080622c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17806"}}
	{"specversion":"1.0","id":"11cbacfb-a656-4f6a-82ff-761e7e80b15a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig"}}
	{"specversion":"1.0","id":"07c0e5d0-2093-47f1-9ea9-8e43bb708450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"365a0efc-9b91-45cd-8da2-0c0116cd9361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"54e021d8-3b9a-45c4-9e8d-06dc36e98d7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube"}}
	{"specversion":"1.0","id":"82120955-26f1-4e6c-b17f-96224894fe3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bb8985d7-e91b-4db5-8392-378747df827e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e38e5d72-3ad8-48e1-ae9f-0ccae22d7a77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1d4551d6-2e06-475f-832b-ee6abccf8544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"76d7f071-30f4-4f6d-af9b-3f4971a319c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"28bd99a8-a15c-442e-9e24-2c2198682c78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-419000 in cluster insufficient-storage-419000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c173d6c-707f-486a-9a04-a014d1779037","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702660877-17806 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"877f2cab-4a5a-4ffc-8480-157bb00968bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-419000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-419000 --output=json --layout=cluster: context deadline exceeded (743ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-419000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-419000
--- FAIL: TestInsufficientStorage (300.74s)

                                                
                                    

Test pass (144/191)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 39.08
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.28.4/json-events 42.01
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.32
17 TestDownloadOnly/v1.29.0-rc.2/json-events 41.78
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.3
23 TestDownloadOnly/DeleteAll 0.65
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
25 TestDownloadOnlyKic 2.09
26 TestBinaryMirror 1.69
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
32 TestAddons/Setup 226.34
36 TestAddons/parallel/InspektorGadget 11.05
37 TestAddons/parallel/MetricsServer 6.83
38 TestAddons/parallel/HelmTiller 11.08
40 TestAddons/parallel/CSI 65.12
41 TestAddons/parallel/Headlamp 14.51
42 TestAddons/parallel/CloudSpanner 5.71
43 TestAddons/parallel/LocalPath 54.17
44 TestAddons/parallel/NvidiaDevicePlugin 5.67
47 TestAddons/serial/GCPAuth/Namespaces 0.1
48 TestAddons/StoppedEnableDisable 11.86
56 TestHyperKitDriverInstallOrUpdate 6.69
59 TestErrorSpam/setup 22.51
60 TestErrorSpam/start 2.13
61 TestErrorSpam/status 1.24
62 TestErrorSpam/pause 1.77
63 TestErrorSpam/unpause 1.84
64 TestErrorSpam/stop 2.86
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 37.96
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 37.84
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 9.67
76 TestFunctional/serial/CacheCmd/cache/add_local 1.68
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
80 TestFunctional/serial/CacheCmd/cache/cache_reload 3.3
81 TestFunctional/serial/CacheCmd/cache/delete 0.17
82 TestFunctional/serial/MinikubeKubectlCmd 0.66
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.79
84 TestFunctional/serial/ExtraConfig 40.13
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 3.08
87 TestFunctional/serial/LogsFileCmd 3.14
88 TestFunctional/serial/InvalidService 4.74
90 TestFunctional/parallel/ConfigCmd 0.52
91 TestFunctional/parallel/DashboardCmd 14.27
92 TestFunctional/parallel/DryRun 1.61
93 TestFunctional/parallel/InternationalLanguage 0.64
94 TestFunctional/parallel/StatusCmd 1.3
99 TestFunctional/parallel/AddonsCmd 0.26
100 TestFunctional/parallel/PersistentVolumeClaim 27.29
102 TestFunctional/parallel/SSHCmd 0.77
103 TestFunctional/parallel/CpCmd 2.89
104 TestFunctional/parallel/MySQL 35.7
105 TestFunctional/parallel/FileSync 0.49
106 TestFunctional/parallel/CertSync 2.9
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
114 TestFunctional/parallel/License 0.56
115 TestFunctional/parallel/Version/short 0.1
116 TestFunctional/parallel/Version/components 0.65
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.38
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.16
122 TestFunctional/parallel/ImageCommands/Setup 3.04
123 TestFunctional/parallel/DockerEnv/bash 2.16
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.61
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.81
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.11
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.09
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.75
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.73
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.93
134 TestFunctional/parallel/ServiceCmd/DeployApp 20.2
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.19
140 TestFunctional/parallel/ServiceCmd/List 0.68
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
142 TestFunctional/parallel/ServiceCmd/HTTPS 15
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
149 TestFunctional/parallel/ServiceCmd/Format 15
150 TestFunctional/parallel/ServiceCmd/URL 15
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
152 TestFunctional/parallel/ProfileCmd/profile_list 0.57
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.69
156 TestFunctional/parallel/MountCmd/VerifyCleanup 2.53
157 TestFunctional/delete_addon-resizer_images 0.14
158 TestFunctional/delete_my-image_image 0.05
159 TestFunctional/delete_minikube_cached_images 0.05
163 TestImageBuild/serial/Setup 21.51
164 TestImageBuild/serial/NormalBuild 3.15
165 TestImageBuild/serial/BuildWithBuildArg 1.3
166 TestImageBuild/serial/BuildWithDockerIgnore 1.11
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.11
177 TestJSONOutput/start/Command 41.26
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.55
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.65
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 10.88
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.77
202 TestKicCustomNetwork/create_custom_network 24.7
203 TestKicCustomNetwork/use_default_bridge_network 23.26
204 TestKicExistingNetwork 24
205 TestKicCustomSubnet 24.15
206 TestKicStaticIP 23.55
207 TestMainNoArgs 0.08
208 TestMinikubeProfile 50.09
211 TestMountStart/serial/StartWithMountFirst 7.18
212 TestMountStart/serial/VerifyMountFirst 0.39
213 TestMountStart/serial/StartWithMountSecond 7.42
233 TestPreload 161.32
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 10.19
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.36
TestDownloadOnly/v1.16.0/json-events (39.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (39.082643322s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (39.08s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-152000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-152000: exit status 85 (290.719096ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-152000 | jenkins | v1.32.0 | 16 Dec 23 13:41 PST |          |
	|         | -p download-only-152000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/16 13:41:01
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 13:41:01.234356   20440 out.go:296] Setting OutFile to fd 1 ...
	I1216 13:41:01.234657   20440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:41:01.234663   20440 out.go:309] Setting ErrFile to fd 2...
	I1216 13:41:01.234667   20440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:41:01.234851   20440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	W1216 13:41:01.234954   20440 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17806-19996/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17806-19996/.minikube/config/config.json: no such file or directory
	I1216 13:41:01.236657   20440 out.go:303] Setting JSON to true
	I1216 13:41:01.261679   20440 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6030,"bootTime":1702756831,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 13:41:01.261799   20440 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 13:41:01.284652   20440 out.go:97] [download-only-152000] minikube v1.32.0 on Darwin 14.2
	I1216 13:41:01.305408   20440 out.go:169] MINIKUBE_LOCATION=17806
	I1216 13:41:01.284840   20440 notify.go:220] Checking for updates...
	W1216 13:41:01.284832   20440 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 13:41:01.347459   20440 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 13:41:01.368573   20440 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 13:41:01.389426   20440 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 13:41:01.410461   20440 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	W1216 13:41:01.452271   20440 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 13:41:01.452730   20440 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 13:41:01.510374   20440 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 13:41:01.510517   20440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:41:01.617021   20440 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:41:01.606858426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:41:01.639545   20440 out.go:97] Using the docker driver based on user configuration
	I1216 13:41:01.639567   20440 start.go:298] selected driver: docker
	I1216 13:41:01.639573   20440 start.go:902] validating driver "docker" against <nil>
	I1216 13:41:01.639687   20440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:41:01.746569   20440 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:41:01.736554325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:41:01.746755   20440 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1216 13:41:01.751220   20440 start_flags.go:394] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I1216 13:41:01.751572   20440 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 13:41:01.773424   20440 out.go:169] Using Docker Desktop driver with root privileges
	I1216 13:41:01.794396   20440 cni.go:84] Creating CNI manager for ""
	I1216 13:41:01.794438   20440 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 13:41:01.794456   20440 start_flags.go:323] config:
	{Name:download-only-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-152000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:41:01.816371   20440 out.go:97] Starting control plane node download-only-152000 in cluster download-only-152000
	I1216 13:41:01.816393   20440 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 13:41:01.837242   20440 out.go:97] Pulling base image v0.0.42-1702660877-17806 ...
	I1216 13:41:01.837302   20440 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1216 13:41:01.837400   20440 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 13:41:01.888600   20440 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 to local cache
	I1216 13:41:01.889207   20440 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local cache directory
	I1216 13:41:01.889346   20440 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 to local cache
	I1216 13:41:01.891352   20440 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1216 13:41:01.891366   20440 cache.go:56] Caching tarball of preloaded images
	I1216 13:41:01.891883   20440 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1216 13:41:01.912375   20440 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1216 13:41:01.912398   20440 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:41:01.990949   20440 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1216 13:41:06.793386   20440 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:41:06.793561   20440 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:41:07.342386   20440 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1216 13:41:07.342648   20440 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/download-only-152000/config.json ...
	I1216 13:41:07.342673   20440 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/download-only-152000/config.json: {Name:mkfd6b219d59e62c5f8bdeca36e676bacff31d34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 13:41:07.344053   20440 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1216 13:41:07.344978   20440 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I1216 13:41:18.641312   20440 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-152000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
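LogsDuration deliberately tolerates a non-zero exit here: --download-only never creates a node, so "minikube logs" reports that the control plane node does not exist and the test only records the failure. A minimal sketch of capturing that exit status from Go (binary path and profile name match the run above; 85 is simply the status observed in this run, not a documented constant):

// logsexit.go: run "minikube logs" against a download-only profile and
// report its exit status; the run above exited with status 85.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"logs", "-p", "download-only-152000").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("exit status %d (%d bytes of output)\n", ee.ExitCode(), len(out))
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("unexpectedly succeeded")
}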

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (42.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (42.012613229s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (42.01s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-152000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-152000: exit status 85 (317.253275ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-152000 | jenkins | v1.32.0 | 16 Dec 23 13:41 PST |          |
	|         | -p download-only-152000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-152000 | jenkins | v1.32.0 | 16 Dec 23 13:41 PST |          |
	|         | -p download-only-152000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/16 13:41:40
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 13:41:40.614228   20483 out.go:296] Setting OutFile to fd 1 ...
	I1216 13:41:40.614450   20483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:41:40.614455   20483 out.go:309] Setting ErrFile to fd 2...
	I1216 13:41:40.614460   20483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:41:40.614658   20483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	W1216 13:41:40.614765   20483 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17806-19996/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17806-19996/.minikube/config/config.json: no such file or directory
	I1216 13:41:40.616330   20483 out.go:303] Setting JSON to true
	I1216 13:41:40.641618   20483 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6069,"bootTime":1702756831,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 13:41:40.641730   20483 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 13:41:40.663121   20483 out.go:97] [download-only-152000] minikube v1.32.0 on Darwin 14.2
	I1216 13:41:40.684108   20483 out.go:169] MINIKUBE_LOCATION=17806
	I1216 13:41:40.663220   20483 notify.go:220] Checking for updates...
	I1216 13:41:40.727198   20483 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 13:41:40.748437   20483 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 13:41:40.769211   20483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 13:41:40.790335   20483 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	W1216 13:41:40.837017   20483 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 13:41:40.837522   20483 config.go:182] Loaded profile config "download-only-152000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1216 13:41:40.837586   20483 start.go:810] api.Load failed for download-only-152000: filestore "download-only-152000": Docker machine "download-only-152000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1216 13:41:40.837733   20483 driver.go:392] Setting default libvirt URI to qemu:///system
	W1216 13:41:40.837760   20483 start.go:810] api.Load failed for download-only-152000: filestore "download-only-152000": Docker machine "download-only-152000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1216 13:41:40.893707   20483 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 13:41:40.893852   20483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:41:41.001696   20483 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:41:40.991157269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:41:41.023890   20483 out.go:97] Using the docker driver based on existing profile
	I1216 13:41:41.023923   20483 start.go:298] selected driver: docker
	I1216 13:41:41.023928   20483 start.go:902] validating driver "docker" against &{Name:download-only-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-152000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:41:41.024085   20483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:41:41.129189   20483 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:41:41.119483656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:41:41.132530   20483 cni.go:84] Creating CNI manager for ""
	I1216 13:41:41.132556   20483 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 13:41:41.132568   20483 start_flags.go:323] config:
	{Name:download-only-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-152000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:41:41.153963   20483 out.go:97] Starting control plane node download-only-152000 in cluster download-only-152000
	I1216 13:41:41.153980   20483 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 13:41:41.174755   20483 out.go:97] Pulling base image v0.0.42-1702660877-17806 ...
	I1216 13:41:41.174806   20483 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 13:41:41.174880   20483 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 13:41:41.225866   20483 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 to local cache
	I1216 13:41:41.226032   20483 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local cache directory
	I1216 13:41:41.226068   20483 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local cache directory, skipping pull
	I1216 13:41:41.226076   20483 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in cache, skipping pull
	I1216 13:41:41.226083   20483 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 as a tarball
	I1216 13:41:41.226092   20483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 13:41:41.226116   20483 cache.go:56] Caching tarball of preloaded images
	I1216 13:41:41.226500   20483 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 13:41:41.247710   20483 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1216 13:41:41.247754   20483 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:41:41.320821   20483 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1216 13:41:46.539153   20483 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:41:46.539493   20483 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:41:47.165012   20483 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1216 13:41:47.165104   20483 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/download-only-152000/config.json ...
	I1216 13:41:47.166077   20483 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1216 13:41:47.167213   20483 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-152000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.32s)
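The Last Start log above fetches each preload tarball with an md5 checksum appended to the download URL and then re-verifies the file on disk. A minimal sketch of that verification step, assuming the v1.28.4 tarball path and the md5 value (7ebdea7754e21f51b865dbfc36b53b7d) quoted in the URL above:

// preloadmd5.go: recompute the md5 of a downloaded preload tarball and
// compare it with the checksum embedded in the download URL above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	const want = "7ebdea7754e21f51b865dbfc36b53b7d"
	path := "/Users/jenkins/minikube-integration/17806-19996/.minikube/" +
		"cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got == want {
		fmt.Println("preload checksum OK:", got)
	} else {
		fmt.Printf("checksum mismatch: got %s want %s\n", got, want)
	}
}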

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (41.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (41.779593106s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (41.78s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-152000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-152000: exit status 85 (295.525877ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-152000 | jenkins | v1.32.0 | 16 Dec 23 13:41 PST |          |
	|         | -p download-only-152000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-152000 | jenkins | v1.32.0 | 16 Dec 23 13:41 PST |          |
	|         | -p download-only-152000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-152000 | jenkins | v1.32.0 | 16 Dec 23 13:42 PST |          |
	|         | -p download-only-152000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/16 13:42:22
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 13:42:22.945825   20518 out.go:296] Setting OutFile to fd 1 ...
	I1216 13:42:22.946049   20518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:42:22.946055   20518 out.go:309] Setting ErrFile to fd 2...
	I1216 13:42:22.946059   20518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:42:22.946254   20518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	W1216 13:42:22.946358   20518 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17806-19996/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17806-19996/.minikube/config/config.json: no such file or directory
	I1216 13:42:22.947953   20518 out.go:303] Setting JSON to true
	I1216 13:42:22.973378   20518 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6111,"bootTime":1702756831,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 13:42:22.973552   20518 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 13:42:22.995566   20518 out.go:97] [download-only-152000] minikube v1.32.0 on Darwin 14.2
	I1216 13:42:23.017516   20518 out.go:169] MINIKUBE_LOCATION=17806
	I1216 13:42:22.995768   20518 notify.go:220] Checking for updates...
	I1216 13:42:23.059397   20518 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 13:42:23.080528   20518 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 13:42:23.101395   20518 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 13:42:23.122611   20518 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	W1216 13:42:23.164392   20518 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 13:42:23.165117   20518 config.go:182] Loaded profile config "download-only-152000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1216 13:42:23.165195   20518 start.go:810] api.Load failed for download-only-152000: filestore "download-only-152000": Docker machine "download-only-152000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1216 13:42:23.165357   20518 driver.go:392] Setting default libvirt URI to qemu:///system
	W1216 13:42:23.165395   20518 start.go:810] api.Load failed for download-only-152000: filestore "download-only-152000": Docker machine "download-only-152000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1216 13:42:23.223271   20518 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 13:42:23.223409   20518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:42:23.329769   20518 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:42:23.319414741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:42:23.350528   20518 out.go:97] Using the docker driver based on existing profile
	I1216 13:42:23.350561   20518 start.go:298] selected driver: docker
	I1216 13:42:23.350571   20518 start.go:902] validating driver "docker" against &{Name:download-only-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-152000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:42:23.350839   20518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:42:23.456046   20518 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2023-12-16 21:42:23.446063103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:42:23.459373   20518 cni.go:84] Creating CNI manager for ""
	I1216 13:42:23.459401   20518 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 13:42:23.459415   20518 start_flags.go:323] config:
	{Name:download-only-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-152000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs
:}
	I1216 13:42:23.480481   20518 out.go:97] Starting control plane node download-only-152000 in cluster download-only-152000
	I1216 13:42:23.480505   20518 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 13:42:23.501320   20518 out.go:97] Pulling base image v0.0.42-1702660877-17806 ...
	I1216 13:42:23.501377   20518 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1216 13:42:23.501441   20518 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local docker daemon
	I1216 13:42:23.553846   20518 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 to local cache
	I1216 13:42:23.554008   20518 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local cache directory
	I1216 13:42:23.554025   20518 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 in local cache directory, skipping pull
	I1216 13:42:23.554032   20518 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 exists in cache, skipping pull
	I1216 13:42:23.554039   20518 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 as a tarball
	I1216 13:42:23.556177   20518 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1216 13:42:23.556189   20518 cache.go:56] Caching tarball of preloaded images
	I1216 13:42:23.557321   20518 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1216 13:42:23.578493   20518 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1216 13:42:23.578536   20518 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:42:23.654759   20518 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1216 13:42:28.757058   20518 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:42:28.757659   20518 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1216 13:42:29.338967   20518 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1216 13:42:29.339052   20518 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/download-only-152000/config.json ...
	I1216 13:42:29.340129   20518 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1216 13:42:29.341818   20518 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17806-19996/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-152000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.30s)
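
Note on the preload download in the log above: download.go fetches the tarball with a "?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436" query, and the following preload.go lines save and verify that checksum before the tarball is used. Below is a minimal Go sketch of the same download-then-verify step, for illustration only; it is not minikube's actual download helper, which hands the checksum query to its downloader.

	// Illustrative only: download a file and verify its MD5 checksum, mirroring
	// the "?checksum=md5:..." pattern in the preload download logged above.
	// This is not minikube's actual download helper.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := md5.New()
		// Stream the body to disk and into the hash in one pass.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// URL and checksum taken from the download.go line above; the tarball is large.
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4"
		if err := downloadWithMD5(url, "preloaded-images.tar.lz4", "d472e9d5f1548dd0d68eb75b714c5436"); err != nil {
			fmt.Println(err)
		}
	}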

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.65s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-152000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
x
+
TestDownloadOnlyKic (2.09s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-342000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-342000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-342000
--- PASS: TestDownloadOnlyKic (2.09s)

                                                
                                    
x
+
TestBinaryMirror (1.69s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-664000 --alsologtostderr --binary-mirror http://127.0.0.1:55340 --driver=docker 
aaa_download_only_test.go:307: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-664000 --alsologtostderr --binary-mirror http://127.0.0.1:55340 --driver=docker : (1.068974258s)
helpers_test.go:175: Cleaning up "binary-mirror-664000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-664000
--- PASS: TestBinaryMirror (1.69s)
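
TestBinaryMirror above points minikube at --binary-mirror http://127.0.0.1:55340, so kubectl/kubelet/kubeadm binaries are fetched from a local endpoint instead of dl.k8s.io. A minimal sketch of such a mirror is just a static file server bound to loopback; the ./mirror directory layout below is an assumption for illustration, not necessarily what the test helper actually serves.

	// Minimal sketch of a local binary mirror: a static file server on loopback.
	// The directory layout is assumed for illustration (e.g. mirroring dl.k8s.io
	// paths such as ./mirror/release/v1.28.4/bin/darwin/amd64/kubectl).
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("serving ./mirror at http://127.0.0.1:55340")
		log.Fatal(http.ListenAndServe("127.0.0.1:55340", fs))
	}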

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-710000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-710000: exit status 85 (193.094692ms)

                                                
                                                
-- stdout --
	* Profile "addons-710000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-710000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-710000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-710000: exit status 85 (212.920566ms)

                                                
                                                
-- stdout --
	* Profile "addons-710000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-710000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
x
+
TestAddons/Setup (226.34s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-710000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-710000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m46.340589174s)
--- PASS: TestAddons/Setup (226.34s)
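
The start command above enables twelve addons through repeated --addons flags. The sketch below assembles that argument list from a slice of addon names purely to make the long invocation easier to read; it is not the literal addons_test.go code, and the flag order does not matter.

	// Hedged sketch: build the repeated --addons flags seen in the start command above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		addons := []string{
			"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
			"gcp-auth", "cloud-spanner", "inspektor-gadget",
			"storage-provisioner-rancher", "nvidia-device-plugin",
			"ingress", "ingress-dns", "helm-tiller",
		}
		args := []string{"start", "-p", "addons-710000", "--wait=true", "--memory=4000", "--alsologtostderr"}
		for _, a := range addons {
			args = append(args, "--addons="+a)
		}
		args = append(args, "--driver=docker")
		fmt.Println("out/minikube-darwin-amd64 " + strings.Join(args, " "))
	}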

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.05s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gjc74" [86505c58-70aa-40f7-8348-cc59cce3942f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005490534s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-710000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-710000: (6.04435897s)
--- PASS: TestAddons/parallel/InspektorGadget (11.05s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.895962ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-rt4w8" [4dacbda3-a218-4fcc-8445-4fa6fca23bf2] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005073678s
addons_test.go:414: (dbg) Run:  kubectl --context addons-710000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-710000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.08s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 7.037432ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-b2c9r" [0956d2a3-1621-4cf6-8346-74ac3408d461] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.184350625s
addons_test.go:472: (dbg) Run:  kubectl --context addons-710000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-710000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.048669219s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-710000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.08s)

                                                
                                    
x
+
TestAddons/parallel/CSI (65.12s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 15.575107ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-710000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-710000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [de6a3f9b-04e0-452c-82b1-22f9f99ee712] Pending
helpers_test.go:344: "task-pv-pod" [de6a3f9b-04e0-452c-82b1-22f9f99ee712] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [de6a3f9b-04e0-452c-82b1-22f9f99ee712] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.007367191s
addons_test.go:583: (dbg) Run:  kubectl --context addons-710000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-710000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-710000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-710000 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-710000 delete pod task-pv-pod: (1.389328903s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-710000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-710000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-710000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8f9eef75-2881-44a9-9a06-963716ff58c1] Pending
helpers_test.go:344: "task-pv-pod-restore" [8f9eef75-2881-44a9-9a06-963716ff58c1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8f9eef75-2881-44a9-9a06-963716ff58c1] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004344425s
addons_test.go:625: (dbg) Run:  kubectl --context addons-710000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-710000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-710000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-710000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-710000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.163334005s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-710000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.12s)
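
The repeated helpers_test.go:394 lines above are a poll: the helper re-runs kubectl get pvc ... -o jsonpath={.status.phase} until the claim reports Bound or the 6m0s wait expires. A rough Go sketch of that kind of poll loop, shelling out to kubectl, follows; it is not the actual helpers_test.go implementation, and the 2-second interval is an assumption.

	// Hedged sketch of the PVC-phase poll represented by the repeated
	// helpers_test.go lines above. Not the real helpers_test.go code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForPVCBound(context, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-710000", "default", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}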

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-710000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-710000 --alsologtostderr -v=1: (1.50557258s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-qdhjs" [00f76b23-ec7b-4fd3-bfc9-b220b35567cd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-qdhjs" [00f76b23-ec7b-4fd3-bfc9-b220b35567cd] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.002927564s
--- PASS: TestAddons/parallel/Headlamp (14.51s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-ctzb2" [63fdec14-6251-470c-ba6b-7be4f1da8967] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003860314s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-710000
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (54.17s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-710000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-710000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9ee3e0c5-7490-40a6-b708-88c00a4f15a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9ee3e0c5-7490-40a6-b708-88c00a4f15a5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9ee3e0c5-7490-40a6-b708-88c00a4f15a5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004419844s
addons_test.go:890: (dbg) Run:  kubectl --context addons-710000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-710000 ssh "cat /opt/local-path-provisioner/pvc-b61c33bb-d1b9-49d8-b206-888ea7a4fa82_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-710000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-710000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-710000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-amd64 -p addons-710000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.202050971s)
--- PASS: TestAddons/parallel/LocalPath (54.17s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2hg7l" [0b74e559-519a-408d-b52c-8c964aa26e00] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00595584s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-710000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-710000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-710000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.86s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-710000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-710000: (11.119569301s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-710000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-710000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-710000
--- PASS: TestAddons/StoppedEnableDisable (11.86s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (6.69s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.69s)

                                                
                                    
x
+
TestErrorSpam/setup (22.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-895000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-895000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 --driver=docker : (22.514659114s)
--- PASS: TestErrorSpam/setup (22.51s)

                                                
                                    
x
+
TestErrorSpam/start (2.13s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 start --dry-run
--- PASS: TestErrorSpam/start (2.13s)

                                                
                                    
x
+
TestErrorSpam/status (1.24s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 status
--- PASS: TestErrorSpam/status (1.24s)

                                                
                                    
x
+
TestErrorSpam/pause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
x
+
TestErrorSpam/stop (2.86s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 stop: (2.25360474s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-895000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-895000 stop
--- PASS: TestErrorSpam/stop (2.86s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17806-19996/.minikube/files/etc/test/nested/copy/20438/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (37.96s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-927000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-927000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.955751574s)
--- PASS: TestFunctional/serial/StartWithProxy (37.96s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.84s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-927000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-927000 --alsologtostderr -v=8: (37.837382944s)
functional_test.go:659: soft start took 37.837897559s for "functional-927000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.84s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-927000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (9.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 cache add registry.k8s.io/pause:3.1: (3.667745069s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 cache add registry.k8s.io/pause:3.3: (3.484420694s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 cache add registry.k8s.io/pause:latest: (2.513989145s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.67s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1827889116/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cache add minikube-local-cache-test:functional-927000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 cache add minikube-local-cache-test:functional-927000: (1.092270733s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cache delete minikube-local-cache-test:functional-927000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-927000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (3.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (407.399757ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 cache reload: (2.058046879s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.30s)
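
The cache_reload subtest above deletes registry.k8s.io/pause:latest from the node, confirms that crictl inspecti now fails, runs minikube cache reload, and then confirms the image is back. A hedged Go sketch of driving that same sequence with os/exec follows; the real functional_test.go uses its own Run helpers rather than this code.

	// Hedged sketch of the remove / verify-missing / cache reload / verify-present
	// sequence shown above, driven with os/exec.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		profile := "functional-927000"
		image := "registry.k8s.io/pause:latest"

		// Remove the image from the node's runtime.
		_ = run("out/minikube-darwin-amd64", "-p", profile, "ssh", "sudo", "docker", "rmi", image)

		// inspecti is expected to fail now ("no such image").
		if err := run("out/minikube-darwin-amd64", "-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err == nil {
			fmt.Println("expected the image to be gone")
		}

		// Reload the host-side cache onto the node and re-check.
		_ = run("out/minikube-darwin-amd64", "-p", profile, "cache", "reload")
		if err := run("out/minikube-darwin-amd64", "-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err != nil {
			fmt.Println("image still missing after cache reload:", err)
		}
	}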

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 kubectl -- --context functional-927000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.79s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-927000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.79s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (40.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-927000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 13:51:56.719495   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:56.727232   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:56.738459   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:56.760591   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:56.801038   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:56.881725   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:57.042201   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:57.362538   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:58.002837   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:51:59.283239   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:52:01.844219   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
E1216 13:52:06.964423   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-927000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.129949873s)
functional_test.go:757: restart took 40.130082669s for "functional-927000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.13s)
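The ExtraConfig restart above shows how per-component flags reach the cluster: --extra-config=<component>.<flag>=<value> is passed on start and forwarded to the named control-plane component. A minimal sketch of the same invocation (not the test's actual code; binary path and profile name are taken from this log):

// Hedged sketch: restart the existing profile with an extra apiserver flag,
// as the ExtraConfig test above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-927000",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}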

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-927000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (3.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 logs: (3.082408403s)
--- PASS: TestFunctional/serial/LogsCmd (3.08s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.14s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd945415652/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd945415652/001/logs.txt: (3.137677634s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.14s)

                                                
                                    
TestFunctional/serial/InvalidService (4.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-927000 apply -f testdata/invalidsvc.yaml
E1216 13:52:17.204920   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-927000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-927000: exit status 115 (582.747967ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32237 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-927000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.74s)
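The InvalidService check above relies on `minikube service` failing cleanly when a Service has no running backing pods: the command prints the service table, then exits with status 115 (SVC_UNREACHABLE). A sketch of that flow, assuming the kubectl context, manifest path, and exit code seen in this run:

// Hedged sketch (not the test code): apply a Service with no backing pods,
// expect `minikube service` to exit non-zero (115 in this run), then clean up.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	exec.Command("kubectl", "--context", "functional-927000",
		"apply", "-f", "testdata/invalidsvc.yaml").Run()

	err := exec.Command("out/minikube-darwin-amd64",
		"service", "invalid-svc", "-p", "functional-927000").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("service exited with:", ee.ExitCode()) // 115 = SVC_UNREACHABLE in this run
	}

	exec.Command("kubectl", "--context", "functional-927000",
		"delete", "-f", "testdata/invalidsvc.yaml").Run()
}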

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 config get cpus: exit status 14 (64.769947ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 config get cpus: exit status 14 (64.014336ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
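The ConfigCmd run above demonstrates the config round trip: `config get` on a key that is not set exits non-zero (status 14 here, "specified key could not be found in config"), while set/get/unset otherwise succeed. A small sketch of the same cycle, assuming the binary path and profile name from this log:

// Hedged sketch: exercise the same set/get/unset cycle the ConfigCmd test runs.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// "config get" on an unset key exits non-zero (status 14 in this run).
	if _, err := run("-p", "functional-927000", "config", "get", "cpus"); err != nil {
		fmt.Println("get before set failed as expected:", err)
	}
	run("-p", "functional-927000", "config", "set", "cpus", "2")
	out, _ := run("-p", "functional-927000", "config", "get", "cpus")
	fmt.Print("cpus after set: ", out)
	run("-p", "functional-927000", "config", "unset", "cpus")
	if _, err := run("-p", "functional-927000", "config", "get", "cpus"); err != nil {
		fmt.Println("get after unset failed as expected:", err)
	}
}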

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-927000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-927000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22738: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.27s)

                                                
                                    
TestFunctional/parallel/DryRun (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-927000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-927000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (778.371101ms)

                                                
                                                
-- stdout --
	* [functional-927000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 13:53:54.985671   22631 out.go:296] Setting OutFile to fd 1 ...
	I1216 13:53:54.985922   22631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:53:54.985929   22631 out.go:309] Setting ErrFile to fd 2...
	I1216 13:53:54.985933   22631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:53:54.986131   22631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 13:53:54.988147   22631 out.go:303] Setting JSON to false
	I1216 13:53:55.019104   22631 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6804,"bootTime":1702756831,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 13:53:55.019238   22631 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 13:53:55.040537   22631 out.go:177] * [functional-927000] minikube v1.32.0 on Darwin 14.2
	I1216 13:53:55.104370   22631 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 13:53:55.082363   22631 notify.go:220] Checking for updates...
	I1216 13:53:55.148272   22631 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 13:53:55.190440   22631 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 13:53:55.232504   22631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 13:53:55.274295   22631 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 13:53:55.316543   22631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 13:53:55.338208   22631 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 13:53:55.338965   22631 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 13:53:55.398821   22631 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 13:53:55.398976   22631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:53:55.511781   22631 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-16 21:53:55.501032378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:53:55.533549   22631 out.go:177] * Using the docker driver based on existing profile
	I1216 13:53:55.575450   22631 start.go:298] selected driver: docker
	I1216 13:53:55.575464   22631 start.go:902] validating driver "docker" against &{Name:functional-927000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-927000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:53:55.575540   22631 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 13:53:55.600547   22631 out.go:177] 
	W1216 13:53:55.621296   22631 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 13:53:55.642394   22631 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-927000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.61s)
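The DryRun failure path above is the memory validator: requesting --memory 250MB is rejected before any cluster work happens, exiting with status 23 and reason RSRC_INSUFFICIENT_REQ_MEMORY, while the second dry run without the memory override succeeds. A sketch of the failing case, assuming the same binary, profile, and driver:

// Hedged sketch: a dry-run start with an impossibly small memory request
// should fail validation (exit status 23 in this run) without touching the cluster.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-927000",
		"--dry-run", "--memory", "250MB", "--driver=docker").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("dry-run rejected, exit code:", ee.ExitCode()) // 23 = RSRC_INSUFFICIENT_REQ_MEMORY here
	}
}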

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-927000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-927000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (643.547665ms)

                                                
                                                
-- stdout --
	* [functional-927000] minikube v1.32.0 sur Darwin 14.2
	  - MINIKUBE_LOCATION=17806
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 13:53:56.579005   22697 out.go:296] Setting OutFile to fd 1 ...
	I1216 13:53:56.579211   22697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:53:56.579216   22697 out.go:309] Setting ErrFile to fd 2...
	I1216 13:53:56.579220   22697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1216 13:53:56.579438   22697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
	I1216 13:53:56.580843   22697 out.go:303] Setting JSON to false
	I1216 13:53:56.603412   22697 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6805,"bootTime":1702756831,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1216 13:53:56.603507   22697 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1216 13:53:56.624511   22697 out.go:177] * [functional-927000] minikube v1.32.0 sur Darwin 14.2
	I1216 13:53:56.666485   22697 out.go:177]   - MINIKUBE_LOCATION=17806
	I1216 13:53:56.687554   22697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
	I1216 13:53:56.666566   22697 notify.go:220] Checking for updates...
	I1216 13:53:56.729365   22697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1216 13:53:56.752355   22697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 13:53:56.773417   22697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube
	I1216 13:53:56.815384   22697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 13:53:56.837276   22697 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1216 13:53:56.837950   22697 driver.go:392] Setting default libvirt URI to qemu:///system
	I1216 13:53:56.893876   22697 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1216 13:53:56.894027   22697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 13:53:57.001112   22697 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-16 21:53:56.991028633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221283328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1216 13:53:57.043540   22697 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1216 13:53:57.064450   22697 start.go:298] selected driver: docker
	I1216 13:53:57.064463   22697 start.go:902] validating driver "docker" against &{Name:functional-927000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702660877-17806@sha256:31c16f9c70521f16e226ed75c6ea29133eabeb392ac53c7ecccf20cb445890b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-927000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1216 13:53:57.064551   22697 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 13:53:57.089304   22697 out.go:177] 
	W1216 13:53:57.110366   22697 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 13:53:57.131305   22697 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)
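InternationalLanguage repeats the same undersized dry run under a French locale, so the failure above is the localized form of the DryRun message ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A sketch of forcing the localized output, assuming the locale is picked up from the environment (e.g. LC_ALL=fr, which is an assumption, not something this log states):

// Hedged sketch: same dry run as above, with a French locale in the
// environment so minikube prints its localized messages.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-927000",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale is read from the environment
	out, _ := cmd.CombinedOutput()
	fmt.Print(string(out)) // expect the French RSRC_INSUFFICIENT_REQ_MEMORY message
}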

                                                
                                    
TestFunctional/parallel/StatusCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6c1afc23-959b-4f0f-9964-67547fa7ee37] Running
E1216 13:53:18.646194   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004969131s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-927000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-927000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-927000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-927000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [01cffc46-2f54-407d-a996-e69d4082cbd0] Pending
helpers_test.go:344: "sp-pod" [01cffc46-2f54-407d-a996-e69d4082cbd0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [01cffc46-2f54-407d-a996-e69d4082cbd0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005478886s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-927000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-927000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-927000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d28cd13-0774-460e-ad43-911a0a5bac0c] Pending
helpers_test.go:344: "sp-pod" [2d28cd13-0774-460e-ad43-911a0a5bac0c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d28cd13-0774-460e-ad43-911a0a5bac0c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005580948s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-927000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.29s)
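The PersistentVolumeClaim test above proves persistence by writing a file in one pod, deleting the pod, recreating it against the same claim, and reading the file back. A condensed sketch of that verification (not the test code; kubectl context, pod name, and manifest paths are taken from this log):

// Hedged sketch: write through the PVC, recreate the pod, and confirm the file survived.
package main

import (
	"fmt"
	"os/exec"
)

func kc(args ...string) ([]byte, error) {
	return exec.Command("kubectl", append([]string{"--context", "functional-927000"}, args...)...).CombinedOutput()
}

func main() {
	kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Print(string(out)) // "foo" should still be listed
	if err != nil {
		fmt.Println("verification failed:", err)
	}
}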

                                                
                                    
TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh -n functional-927000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cp functional-927000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd1089621905/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh -n functional-927000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh -n functional-927000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.89s)
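CpCmd round-trips a file: copy it into the node, read it back over ssh, copy it out to the host, and copy it into a nested path that does not exist beforehand. A minimal sketch of the first round trip, assuming the binary path and profile name from this log:

// Hedged sketch: copy a local file into the node and read it back via ssh.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-darwin-amd64", append([]string{"-p", "functional-927000"}, args...)...).CombinedOutput()
}

func main() {
	if _, err := mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	out, _ := mk("ssh", "-n", "functional-927000", "sudo cat /home/docker/cp-test.txt")
	fmt.Print(string(out))
}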

                                                
                                    
TestFunctional/parallel/MySQL (35.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-927000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-q968w" [6eeaa783-2408-4e30-a8fa-b46a157a1baa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-q968w" [6eeaa783-2408-4e30-a8fa-b46a157a1baa] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.005209924s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-927000 exec mysql-859648c796-q968w -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-927000 exec mysql-859648c796-q968w -- mysql -ppassword -e "show databases;": exit status 1 (242.821015ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-927000 exec mysql-859648c796-q968w -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-927000 exec mysql-859648c796-q968w -- mysql -ppassword -e "show databases;": exit status 1 (335.874814ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-927000 exec mysql-859648c796-q968w -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-927000 exec mysql-859648c796-q968w -- mysql -ppassword -e "show databases;": exit status 1 (171.291519ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-927000 exec mysql-859648c796-q968w -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.70s)
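The MySQL check above polls: the first attempts at `mysql -e "show databases;"` fail with transient errors while mysqld inside the pod is still initializing (access denied, then the socket not yet up), and the test simply retries until a query succeeds. A sketch of that retry loop, assuming the kubectl context, pod name, and password seen in this run:

// Hedged sketch: retry the query until mysqld inside the pod is ready to serve it.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-859648c796-q968w" // pod name from this run
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-927000",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed, retrying: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became ready")
}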

                                                
                                    
TestFunctional/parallel/FileSync (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/20438/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo cat /etc/test/nested/copy/20438/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.49s)

                                                
                                    
TestFunctional/parallel/CertSync (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/20438.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo cat /etc/ssl/certs/20438.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/20438.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo cat /usr/share/ca-certificates/20438.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/204382.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo cat /etc/ssl/certs/204382.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/204382.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo cat /usr/share/ca-certificates/204382.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.90s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-927000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "sudo systemctl is-active crio": exit status 1 (615.010678ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
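NonActiveRuntimeDisabled passes when the inactive runtime's unit reports not-running: `systemctl is-active crio` prints "inactive" and exits non-zero (the ssh wrapper reports status 3 in this run), which the test treats as success on a Docker-runtime cluster. A sketch of the same probe, assuming the binary path and profile from this log:

// Hedged sketch: probe a runtime unit over minikube ssh; a non-zero exit
// with "inactive" on stdout means the runtime is present but not active.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-927000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.Output() // stdout only: "inactive" in this run
	state := strings.TrimSpace(string(out))
	fmt.Printf("crio state: %q (err: %v)\n", state, err)
	if state != "active" {
		fmt.Println("crio is not the active runtime, as expected on a Docker cluster")
	}
}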

                                                
                                    
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-927000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-927000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-927000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-927000 image ls --format short --alsologtostderr:
I1216 13:54:13.148756   22880 out.go:296] Setting OutFile to fd 1 ...
I1216 13:54:13.149267   22880 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:13.149276   22880 out.go:309] Setting ErrFile to fd 2...
I1216 13:54:13.149288   22880 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:13.149510   22880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
I1216 13:54:13.150193   22880 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:13.150301   22880 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:13.150723   22880 cli_runner.go:164] Run: docker container inspect functional-927000 --format={{.State.Status}}
I1216 13:54:13.206409   22880 ssh_runner.go:195] Run: systemctl --version
I1216 13:54:13.206488   22880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927000
I1216 13:54:13.262343   22880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55985 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/functional-927000/id_rsa Username:docker}
I1216 13:54:13.357516   22880 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-927000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/localhost/my-image                | functional-927000 | ab71e6f91a9e1 | 1.24MB |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-927000 | ef6d11c0c7158 | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-927000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-927000 image ls --format table --alsologtostderr:
I1216 13:54:18.312813   22940 out.go:296] Setting OutFile to fd 1 ...
I1216 13:54:18.313126   22940 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:18.313132   22940 out.go:309] Setting ErrFile to fd 2...
I1216 13:54:18.313136   22940 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:18.313335   22940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
I1216 13:54:18.313981   22940 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:18.314079   22940 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:18.314545   22940 cli_runner.go:164] Run: docker container inspect functional-927000 --format={{.State.Status}}
I1216 13:54:18.368890   22940 ssh_runner.go:195] Run: systemctl --version
I1216 13:54:18.368964   22940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927000
I1216 13:54:18.422107   22940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55985 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/functional-927000/id_rsa Username:docker}
I1216 13:54:18.516735   22940 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-927000 image ls --format json --alsologtostderr:
[{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ab71e6f91a9e1fec2917caee49f469d1a0bcbd47436f74ba30ba37dc6f6bce3b","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-927000"],"size":"1240000"},{"id":"a6bd71
f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[]
,"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ef6d11c0c71581ab3fbb3d1aff87937b66e2e6f10d991a53f9a70ebc6825151e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-927000"],"size":"30"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-92
7000"],"size":"32900000"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-927000 image ls --format json --alsologtostderr:
I1216 13:54:17.958508   22931 out.go:296] Setting OutFile to fd 1 ...
I1216 13:54:17.959185   22931 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:17.959198   22931 out.go:309] Setting ErrFile to fd 2...
I1216 13:54:17.959210   22931 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:17.959642   22931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
I1216 13:54:17.961166   22931 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:17.961376   22931 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:17.962584   22931 cli_runner.go:164] Run: docker container inspect functional-927000 --format={{.State.Status}}
I1216 13:54:18.036686   22931 ssh_runner.go:195] Run: systemctl --version
I1216 13:54:18.036768   22931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927000
I1216 13:54:18.090089   22931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55985 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/functional-927000/id_rsa Username:docker}
I1216 13:54:18.203990   22931 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)
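
Note: the JSON listing above is a single flat array of image objects (id, repoDigests, repoTags, size). A quick way to post-process it outside the test harness (purely illustrative; jq is not part of the test suite) is:

	out/minikube-darwin-amd64 -p functional-927000 image ls --format json | jq -r '.[] | [.repoTags[0], .size] | @tsv'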

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-927000 image ls --format yaml --alsologtostderr:
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-927000
size: "32900000"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: ef6d11c0c71581ab3fbb3d1aff87937b66e2e6f10d991a53f9a70ebc6825151e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-927000
size: "30"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-927000 image ls --format yaml --alsologtostderr:
I1216 13:54:13.463164   22886 out.go:296] Setting OutFile to fd 1 ...
I1216 13:54:13.463394   22886 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:13.463399   22886 out.go:309] Setting ErrFile to fd 2...
I1216 13:54:13.463403   22886 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:13.463595   22886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
I1216 13:54:13.464191   22886 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:13.464283   22886 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:13.464676   22886 cli_runner.go:164] Run: docker container inspect functional-927000 --format={{.State.Status}}
I1216 13:54:13.517601   22886 ssh_runner.go:195] Run: systemctl --version
I1216 13:54:13.517697   22886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927000
I1216 13:54:13.571582   22886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55985 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/functional-927000/id_rsa Username:docker}
I1216 13:54:13.665737   22886 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh pgrep buildkitd: exit status 1 (366.729931ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image build -t localhost/my-image:functional-927000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 image build -t localhost/my-image:functional-927000 testdata/build --alsologtostderr: (3.480369829s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-927000 image build -t localhost/my-image:functional-927000 testdata/build --alsologtostderr:
I1216 13:54:14.135513   22902 out.go:296] Setting OutFile to fd 1 ...
I1216 13:54:14.136080   22902 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:14.136086   22902 out.go:309] Setting ErrFile to fd 2...
I1216 13:54:14.136090   22902 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1216 13:54:14.136294   22902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17806-19996/.minikube/bin
I1216 13:54:14.136975   22902 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:14.138364   22902 config.go:182] Loaded profile config "functional-927000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1216 13:54:14.138792   22902 cli_runner.go:164] Run: docker container inspect functional-927000 --format={{.State.Status}}
I1216 13:54:14.190495   22902 ssh_runner.go:195] Run: systemctl --version
I1216 13:54:14.190573   22902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927000
I1216 13:54:14.242127   22902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55985 SSHKeyPath:/Users/jenkins/minikube-integration/17806-19996/.minikube/machines/functional-927000/id_rsa Username:docker}
I1216 13:54:14.333488   22902 build_images.go:151] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2818263092.tar
I1216 13:54:14.334086   22902 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 13:54:14.343006   22902 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2818263092.tar
I1216 13:54:14.347022   22902 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2818263092.tar: stat -c "%s %y" /var/lib/minikube/build/build.2818263092.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2818263092.tar': No such file or directory
I1216 13:54:14.347051   22902 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2818263092.tar --> /var/lib/minikube/build/build.2818263092.tar (3072 bytes)
I1216 13:54:14.368557   22902 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2818263092
I1216 13:54:14.377325   22902 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2818263092 -xf /var/lib/minikube/build/build.2818263092.tar
I1216 13:54:14.386983   22902 docker.go:346] Building image: /var/lib/minikube/build/build.2818263092
I1216 13:54:14.387061   22902 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-927000 /var/lib/minikube/build/build.2818263092
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 2.3s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ab71e6f91a9e1fec2917caee49f469d1a0bcbd47436f74ba30ba37dc6f6bce3b done
#8 naming to localhost/my-image:functional-927000 done
#8 DONE 0.0s
I1216 13:54:17.512948   22902 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-927000 /var/lib/minikube/build/build.2818263092: (3.125828124s)
I1216 13:54:17.513025   22902 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2818263092
I1216 13:54:17.523515   22902 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2818263092.tar
I1216 13:54:17.532657   22902 build_images.go:207] Built localhost/my-image:functional-927000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2818263092.tar
I1216 13:54:17.532689   22902 build_images.go:123] succeeded building to: functional-927000
I1216 13:54:17.532693   22902 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.16s)
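
Note: the build log above implies a minimal two-file context: a Dockerfile with three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) and a small content.txt. A hand-run reproduction sketch assuming that layout (the temporary directory and file contents here are illustrative, not copied from testdata/build):

	mkdir -p /tmp/image-build-demo && cd /tmp/image-build-demo
	printf 'demo\n' > content.txt
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-darwin-amd64 -p functional-927000 image build -t localhost/my-image:functional-927000 . --alsologtostderr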

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (3.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.946220363s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-927000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.04s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-927000 docker-env) && out/minikube-darwin-amd64 status -p functional-927000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-927000 docker-env) && out/minikube-darwin-amd64 status -p functional-927000": (1.396995169s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-927000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image load --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 image load --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr: (4.282144875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.61s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image load --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 image load --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr: (2.297160177s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.686018562s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-927000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image load --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr
E1216 13:52:37.685369   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 image load --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr: (4.969911084s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image save gcr.io/google-containers/addon-resizer:functional-927000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 image save gcr.io/google-containers/addon-resizer:functional-927000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.090215705s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image rm gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.398875459s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-927000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 image save --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-927000 image save --daemon gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr: (1.795835717s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-927000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)
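
Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above amount to a save/remove/load roundtrip for the addon-resizer tag. The same sequence run by hand (commands exactly as they appear in the logs above) would be:

	out/minikube-darwin-amd64 -p functional-927000 image save gcr.io/google-containers/addon-resizer:functional-927000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
	out/minikube-darwin-amd64 -p functional-927000 image rm gcr.io/google-containers/addon-resizer:functional-927000 --alsologtostderr
	out/minikube-darwin-amd64 -p functional-927000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
	out/minikube-darwin-amd64 -p functional-927000 image ls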

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (20.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-927000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-927000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-qrh98" [227c24c1-adf5-40a6-90d5-5b5945e1a56b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-qrh98" [227c24c1-adf5-40a6-90d5-5b5945e1a56b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.006894373s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-927000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-927000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-927000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-927000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 22400: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-927000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-927000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c74da99a-8c20-4846-8f5b-86bd34340e18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c74da99a-8c20-4846-8f5b-86bd34340e18] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.005915571s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 service list -o json
functional_test.go:1493: Took "663.303465ms" to run "out/minikube-darwin-amd64 -p functional-927000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 service --namespace=default --https --url hello-node: signal: killed (15.003067751s)

                                                
                                                
-- stdout --
	https://127.0.0.1:56265

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:56265
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-927000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-927000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 22430: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 service hello-node --url --format={{.IP}}: signal: killed (15.003046682s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 service hello-node --url: signal: killed (15.003615819s)

                                                
                                                
-- stdout --
	http://127.0.0.1:56309

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:56309
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "479.535992ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "92.844473ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "605.165814ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "81.090951ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2468346234/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2468346234/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2468346234/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T" /mount1: exit status 1 (526.080473ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-927000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2468346234/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2468346234/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2468346234/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)
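
Note: the cleanup check above starts three background mounts from one host directory, verifies each mount point with findmnt over ssh, then tears everything down with a single --kill. A condensed single-mount equivalent, with an illustrative host path in place of the per-test temp directory:

	out/minikube-darwin-amd64 mount -p functional-927000 /tmp/demo:/mount1 --alsologtostderr -v=1 &
	out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount1"
	out/minikube-darwin-amd64 mount -p functional-927000 --kill=true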

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-927000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-927000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-927000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (21.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-309000 --driver=docker 
E1216 13:54:40.567729   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-309000 --driver=docker : (21.505790845s)
--- PASS: TestImageBuild/serial/Setup (21.51s)

                                                
                                    
TestImageBuild/serial/NormalBuild (3.15s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-309000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-309000: (3.152230365s)
--- PASS: TestImageBuild/serial/NormalBuild (3.15s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.3s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-309000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-309000: (1.303040617s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.30s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.11s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-309000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-309000: (1.109020438s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.11s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.11s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-309000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-309000: (1.108049713s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.11s)

                                                
                                    
TestJSONOutput/start/Command (41.26s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-678000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-678000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (41.263414704s)
--- PASS: TestJSONOutput/start/Command (41.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-678000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-678000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (10.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-678000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-678000 --output=json --user=testUser: (10.881478703s)
--- PASS: TestJSONOutput/stop/Command (10.88s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.77s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-160000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-160000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (385.058459ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4fd3fb7c-56cd-4112-b16d-6ee21693cc68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-160000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99687266-5766-46e8-bac9-69bf6b79d7ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17806"}}
	{"specversion":"1.0","id":"1d5d6601-1d11-4489-87ff-03c9a0578a3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig"}}
	{"specversion":"1.0","id":"c2988ced-ed06-48ab-99e0-f05a9cc67eaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"22d454d0-01e7-43ef-81d9-04a12c4caee4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"777e7406-712b-46b0-9987-90c493f95aeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17806-19996/.minikube"}}
	{"specversion":"1.0","id":"344e242d-7126-4d79-996a-5431ac919645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"36092c6e-8654-4bac-a9a6-60821a2e3caf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-160000
--- PASS: TestErrorJSONOutput (0.77s)
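
The stdout block above is newline-delimited CloudEvents-style JSON, which is what makes the failure machine-readable: the final event carries type io.k8s.sigs.minikube.error with name DRV_UNSUPPORTED_OS and exitcode 56. A minimal sketch (not part of the test suite) of pulling that error event back out of a captured copy of the output; the struct fields mirror the fields visible above, and the file name events.json is a hypothetical capture of that stdout.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models one JSON line emitted by `minikube start --output=json`.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	f, err := os.Open("events.json") // hypothetical capture of the stdout block above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// The run above ends with name=DRV_UNSUPPORTED_OS and exitcode=56.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}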

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (24.7s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-743000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-743000 --network=: (22.251698374s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-743000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-743000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-743000: (2.394116907s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.70s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.26s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-079000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-079000 --network=bridge: (21.140762287s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-079000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-079000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-079000: (2.066042151s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.26s)

                                                
                                    
x
+
TestKicExistingNetwork (24s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-923000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-923000 --network=existing-network: (21.381540768s)
helpers_test.go:175: Cleaning up "existing-network-923000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-923000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-923000: (2.267961668s)
--- PASS: TestKicExistingNetwork (24.00s)

                                                
                                    
x
+
TestKicCustomSubnet (24.15s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-742000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-742000 --subnet=192.168.60.0/24: (21.690221591s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-742000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-742000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-742000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-742000: (2.405429773s)
--- PASS: TestKicCustomSubnet (24.15s)
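
The subnet assertion above reduces to one docker CLI call. A minimal sketch, assuming docker is on PATH and the custom-subnet-742000 network from the log still exists, of the same check: read the network's IPAM subnet with the --format template shown above and compare it with the value passed to --subnet.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24" // the value passed to --subnet in the log above
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-742000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		return
	}
	fmt.Println("subnet matches:", got)
}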

                                                
                                    
x
+
TestKicStaticIP (23.55s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-546000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-546000 --static-ip=192.168.200.200: (20.909559323s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-546000 ip
helpers_test.go:175: Cleaning up "static-ip-546000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-546000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-546000: (2.41338399s)
--- PASS: TestKicStaticIP (23.55s)
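
The static-IP run can be verified the same way, by comparing `minikube ip` against the requested address. A minimal sketch, assuming the out/minikube-darwin-amd64 binary and the static-ip-546000 profile from the log above are still available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.200.200" // the value passed to --static-ip above
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "static-ip-546000", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("static IP not applied: got %s, want %s\n", got, want)
	} else {
		fmt.Println("static IP applied:", got)
	}
}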

                                                
                                    
x
+
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
x
+
TestMinikubeProfile (50.09s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-975000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-975000 --driver=docker : (21.200098418s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-977000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-977000 --driver=docker : (22.366947308s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-975000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-977000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-977000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-977000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-977000: (2.398011829s)
helpers_test.go:175: Cleaning up "first-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-975000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-975000: (2.472984057s)
--- PASS: TestMinikubeProfile (50.09s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.18s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-367000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-367000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.178372482s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.18s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-367000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-381000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
E1216 14:06:56.731182   20438 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17806-19996/.minikube/profiles/addons-710000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-381000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.416318506s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.42s)

                                                
                                    
x
+
TestPreload (161.32s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-898000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-898000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m35.697986582s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-898000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-898000 image pull gcr.io/k8s-minikube/busybox: (4.636761131s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-898000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-898000: (10.825349047s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-898000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-898000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (47.374030423s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-898000 image list
helpers_test.go:175: Cleaning up "test-preload-898000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-898000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-898000: (2.481016008s)
--- PASS: TestPreload (161.32s)
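
The sequence above is: start with --preload=false on v1.24.4, pull busybox, stop the cluster, restart it, then confirm the pulled image is still present. A minimal sketch, assuming the same binary and test-preload-898000 profile as the log, of that final check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "test-preload-898000", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox image survived the stop/start cycle")
	} else {
		fmt.Println("busybox image missing after restart")
	}
}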

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.19s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17806
- KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current962145101/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current962145101/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current962145101/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current962145101/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.19s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.36s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17806
- KUBECONFIG=/Users/jenkins/minikube-integration/17806-19996/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current269799595/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current269799595/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current269799595/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current269799595/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.36s)

                                                
                                    

Test skip (21/191)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 16.085161ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bjlkt" [9f98d90a-f9c4-4bcf-885b-432a571327a3] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005292882s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g9rql" [73b8b664-7570-48c0-a341-3226aa929253] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004822091s
addons_test.go:339: (dbg) Run:  kubectl --context addons-710000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-710000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-710000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.832891268s)
addons_test.go:354: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.91s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (11.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-710000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-710000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-710000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d1c52b7d-bcf9-4725-b95b-1889fe712144] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d1c52b7d-bcf9-4725-b95b-1889fe712144] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00373609s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-710000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:281: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.32s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-927000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-927000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-77rxd" [99cc53c2-4c77-4bfe-9fda-edfa265744c8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-77rxd" [99cc53c2-4c77-4bfe-9fda-edfa265744c8] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003782009s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (13.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port684930229/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702763633791140000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port684930229/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702763633791140000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port684930229/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702763633791140000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port684930229/001/test-1702763633791140000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (491.622441ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (566.100277ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (557.575493ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (400.493885ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.106841ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.480669ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (421.705261ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "sudo umount -f /mount-9p": exit status 1 (451.485408ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-amd64 -p functional-927000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port684930229/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.96s)
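
Each findmnt attempt above is the test polling for the 9p mount inside the node before giving up. A minimal sketch, assuming the same binary and functional-927000 profile as the log, of that polling loop; on this macOS runner it times out because the unsigned mount binary cannot listen on a non-localhost port without an interactive prompt, which is exactly why the test skips.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		// Same probe the test runs: succeed once the 9p mount is visible in the node.
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-927000",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if err := cmd.Run(); err == nil {
			fmt.Println("9p mount is visible inside the node")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount did not appear before the deadline; skipping, as the test does")
}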

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (14.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port432757872/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (474.247784ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (477.224524ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.856ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.936678ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
2023/12/16 13:54:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (446.936134ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (358.432066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (514.081593ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.707966ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-927000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-927000 ssh "sudo umount -f /mount-9p": exit status 1 (366.829394ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-927000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-927000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port432757872/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.39s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    