Test Report: Docker_macOS 17953

eb30bbcea83871e91962f38accf20a5558557b42:2024-01-15:32709

Failed tests (24/197)

TestOffline (756.86s)
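To reproduce this failure outside CI, the start command the test ran (copied verbatim from the log below) can be re-run locally, assuming a built out/minikube-darwin-amd64 binary and a running Docker Desktop; offline-docker-301000 is simply the profile name this test used:

	out/minikube-darwin-amd64 start -p offline-docker-301000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker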

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-301000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-301000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m35.94512641s)

-- stdout --
	* [offline-docker-301000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-301000 in cluster offline-docker-301000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-301000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I0115 06:12:22.784247   72225 out.go:296] Setting OutFile to fd 1 ...
	I0115 06:12:22.784596   72225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 06:12:22.784604   72225 out.go:309] Setting ErrFile to fd 2...
	I0115 06:12:22.784608   72225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 06:12:22.784790   72225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 06:12:22.786398   72225 out.go:303] Setting JSON to false
	I0115 06:12:22.810025   72225 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":34685,"bootTime":1705293257,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 06:12:22.810129   72225 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 06:12:22.831944   72225 out.go:177] * [offline-docker-301000] minikube v1.32.0 on Darwin 14.2.1
	I0115 06:12:22.873384   72225 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 06:12:22.873466   72225 notify.go:220] Checking for updates...
	I0115 06:12:22.915470   72225 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 06:12:22.936305   72225 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 06:12:22.957534   72225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 06:12:22.978507   72225 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 06:12:22.999299   72225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 06:12:23.020743   72225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 06:12:23.077379   72225 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 06:12:23.077578   72225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 06:12:23.241235   72225 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:8 ContainersRunning:0 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:143 SystemTime:2024-01-15 14:12:23.062077828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 06:12:23.262871   72225 out.go:177] * Using the docker driver based on user configuration
	I0115 06:12:23.284034   72225 start.go:298] selected driver: docker
	I0115 06:12:23.284053   72225 start.go:902] validating driver "docker" against <nil>
	I0115 06:12:23.284066   72225 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 06:12:23.287000   72225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 06:12:23.387762   72225 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:8 ContainersRunning:0 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:143 SystemTime:2024-01-15 14:12:23.240732488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 06:12:23.387914   72225 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 06:12:23.388105   72225 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 06:12:23.409062   72225 out.go:177] * Using Docker Desktop driver with root privileges
	I0115 06:12:23.430087   72225 cni.go:84] Creating CNI manager for ""
	I0115 06:12:23.430131   72225 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0115 06:12:23.430150   72225 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 06:12:23.430180   72225 start_flags.go:321] config:
	{Name:offline-docker-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 06:12:23.473157   72225 out.go:177] * Starting control plane node offline-docker-301000 in cluster offline-docker-301000
	I0115 06:12:23.515113   72225 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 06:12:23.579093   72225 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 06:12:23.620716   72225 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 06:12:23.620761   72225 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0115 06:12:23.620771   72225 cache.go:56] Caching tarball of preloaded images
	I0115 06:12:23.620788   72225 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 06:12:23.620903   72225 preload.go:174] Found /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 06:12:23.620913   72225 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0115 06:12:23.621787   72225 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/offline-docker-301000/config.json ...
	I0115 06:12:23.621866   72225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/offline-docker-301000/config.json: {Name:mkc6bf2c25fc53bb045b56218ed4b6f05326b9c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 06:12:23.672931   72225 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 06:12:23.673152   72225 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 06:12:23.673170   72225 cache.go:194] Successfully downloaded all kic artifacts
	I0115 06:12:23.673212   72225 start.go:365] acquiring machines lock for offline-docker-301000: {Name:mk777770c680d16be68699b00e5578abf6398a0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 06:12:23.673383   72225 start.go:369] acquired machines lock for "offline-docker-301000" in 159.978µs
	I0115 06:12:23.673409   72225 start.go:93] Provisioning new machine with config: &{Name:offline-docker-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-301000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0115 06:12:23.673484   72225 start.go:125] createHost starting for "" (driver="docker")
	I0115 06:12:23.694888   72225 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0115 06:12:23.695066   72225 start.go:159] libmachine.API.Create for "offline-docker-301000" (driver="docker")
	I0115 06:12:23.695096   72225 client.go:168] LocalClient.Create starting
	I0115 06:12:23.695228   72225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 06:12:23.695273   72225 main.go:141] libmachine: Decoding PEM data...
	I0115 06:12:23.695296   72225 main.go:141] libmachine: Parsing certificate...
	I0115 06:12:23.695372   72225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 06:12:23.695407   72225 main.go:141] libmachine: Decoding PEM data...
	I0115 06:12:23.695416   72225 main.go:141] libmachine: Parsing certificate...
	I0115 06:12:23.695927   72225 cli_runner.go:164] Run: docker network inspect offline-docker-301000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 06:12:23.767283   72225 cli_runner.go:211] docker network inspect offline-docker-301000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 06:12:23.767380   72225 network_create.go:281] running [docker network inspect offline-docker-301000] to gather additional debugging logs...
	I0115 06:12:23.767397   72225 cli_runner.go:164] Run: docker network inspect offline-docker-301000
	W0115 06:12:23.818944   72225 cli_runner.go:211] docker network inspect offline-docker-301000 returned with exit code 1
	I0115 06:12:23.818971   72225 network_create.go:284] error running [docker network inspect offline-docker-301000]: docker network inspect offline-docker-301000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-301000 not found
	I0115 06:12:23.818984   72225 network_create.go:286] output of [docker network inspect offline-docker-301000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-301000 not found
	
	** /stderr **
	I0115 06:12:23.819103   72225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 06:12:23.893799   72225 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:12:23.894171   72225 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022bc3b0}
	I0115 06:12:23.894188   72225 network_create.go:124] attempt to create docker network offline-docker-301000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0115 06:12:23.894262   72225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-301000 offline-docker-301000
	I0115 06:12:23.981689   72225 network_create.go:108] docker network offline-docker-301000 192.168.58.0/24 created
	I0115 06:12:23.981729   72225 kic.go:121] calculated static IP "192.168.58.2" for the "offline-docker-301000" container
	I0115 06:12:23.981865   72225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 06:12:24.034949   72225 cli_runner.go:164] Run: docker volume create offline-docker-301000 --label name.minikube.sigs.k8s.io=offline-docker-301000 --label created_by.minikube.sigs.k8s.io=true
	I0115 06:12:24.088268   72225 oci.go:103] Successfully created a docker volume offline-docker-301000
	I0115 06:12:24.088387   72225 cli_runner.go:164] Run: docker run --rm --name offline-docker-301000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-301000 --entrypoint /usr/bin/test -v offline-docker-301000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 06:12:24.907296   72225 oci.go:107] Successfully prepared a docker volume offline-docker-301000
	I0115 06:12:24.907331   72225 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 06:12:24.907345   72225 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 06:12:24.907447   72225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-301000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 06:18:23.683590   72225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 06:18:23.683690   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:23.736166   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:18:23.736272   72225 retry.go:31] will retry after 220.33958ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:23.958321   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:24.010942   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:18:24.011053   72225 retry.go:31] will retry after 189.636369ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:24.202969   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:24.257843   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:18:24.257939   72225 retry.go:31] will retry after 652.635853ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:24.912444   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:24.967449   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	W0115 06:18:24.967571   72225 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	
	W0115 06:18:24.967595   72225 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:24.967652   72225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 06:18:24.967712   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:25.018529   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:18:25.018647   72225 retry.go:31] will retry after 194.799042ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:25.215006   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:25.269306   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:18:25.269423   72225 retry.go:31] will retry after 561.481183ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:25.833321   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:25.887844   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:18:25.887943   72225 retry.go:31] will retry after 301.391128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:26.191700   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:26.245811   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:18:26.245911   72225 retry.go:31] will retry after 594.092863ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:26.841081   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:18:26.895339   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	W0115 06:18:26.895439   72225 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	
	W0115 06:18:26.895456   72225 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:26.895472   72225 start.go:128] duration metric: createHost completed in 6m3.23440049s
	I0115 06:18:26.895480   72225 start.go:83] releasing machines lock for "offline-docker-301000", held for 6m3.234511506s
	W0115 06:18:26.895493   72225 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I0115 06:18:26.895920   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:26.946721   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:26.946780   72225 delete.go:82] Unable to get host status for offline-docker-301000, assuming it has already been deleted: state: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	W0115 06:18:26.946847   72225 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0115 06:18:26.946858   72225 start.go:709] Will try again in 5 seconds ...
	I0115 06:18:31.949248   72225 start.go:365] acquiring machines lock for offline-docker-301000: {Name:mk777770c680d16be68699b00e5578abf6398a0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 06:18:31.949399   72225 start.go:369] acquired machines lock for "offline-docker-301000" in 115.274µs
	I0115 06:18:31.949430   72225 start.go:96] Skipping create...Using existing machine configuration
	I0115 06:18:31.949441   72225 fix.go:54] fixHost starting: 
	I0115 06:18:31.949791   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:32.002302   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:32.002348   72225 fix.go:102] recreateIfNeeded on offline-docker-301000: state= err=unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:32.002367   72225 fix.go:107] machineExists: false. err=machine does not exist
	I0115 06:18:32.025951   72225 out.go:177] * docker "offline-docker-301000" container is missing, will recreate.
	I0115 06:18:32.067671   72225 delete.go:124] DEMOLISHING offline-docker-301000 ...
	I0115 06:18:32.067819   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:32.119503   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	W0115 06:18:32.119575   72225 stop.go:75] unable to get state: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:32.119597   72225 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:32.119986   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:32.170303   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:32.170352   72225 delete.go:82] Unable to get host status for offline-docker-301000, assuming it has already been deleted: state: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:32.170426   72225 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-301000
	W0115 06:18:32.220907   72225 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-301000 returned with exit code 1
	I0115 06:18:32.220953   72225 kic.go:371] could not find the container offline-docker-301000 to remove it. will try anyways
	I0115 06:18:32.221035   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:32.271367   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	W0115 06:18:32.271414   72225 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:32.271492   72225 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-301000 /bin/bash -c "sudo init 0"
	W0115 06:18:32.322239   72225 cli_runner.go:211] docker exec --privileged -t offline-docker-301000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0115 06:18:32.322273   72225 oci.go:650] error shutdown offline-docker-301000: docker exec --privileged -t offline-docker-301000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:33.324440   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:33.380185   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:33.380236   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:33.380246   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:33.380272   72225 retry.go:31] will retry after 683.914082ms: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:34.065500   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:34.118665   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:34.118715   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:34.118724   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:34.118746   72225 retry.go:31] will retry after 476.752011ms: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:34.595805   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:34.650483   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:34.650536   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:34.650550   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:34.650575   72225 retry.go:31] will retry after 744.418658ms: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:35.396146   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:35.451233   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:35.451279   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:35.451292   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:35.451316   72225 retry.go:31] will retry after 1.827715145s: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:37.280227   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:37.336346   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:37.336401   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:37.336412   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:37.336441   72225 retry.go:31] will retry after 2.69210934s: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:40.030776   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:40.086541   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:40.086589   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:40.086598   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:40.086633   72225 retry.go:31] will retry after 4.431348079s: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:44.518427   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:44.572409   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:44.572459   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:44.572469   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:44.572495   72225 retry.go:31] will retry after 6.72502865s: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:51.298293   72225 cli_runner.go:164] Run: docker container inspect offline-docker-301000 --format={{.State.Status}}
	W0115 06:18:51.349592   72225 cli_runner.go:211] docker container inspect offline-docker-301000 --format={{.State.Status}} returned with exit code 1
	I0115 06:18:51.349641   72225 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:18:51.349654   72225 oci.go:664] temporary error: container offline-docker-301000 status is  but expect it to be exited
	I0115 06:18:51.349685   72225 oci.go:88] couldn't shut down offline-docker-301000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	 
	I0115 06:18:51.349754   72225 cli_runner.go:164] Run: docker rm -f -v offline-docker-301000
	I0115 06:18:51.401599   72225 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-301000
	W0115 06:18:51.453213   72225 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-301000 returned with exit code 1
	I0115 06:18:51.453334   72225 cli_runner.go:164] Run: docker network inspect offline-docker-301000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 06:18:51.504038   72225 cli_runner.go:164] Run: docker network rm offline-docker-301000
	I0115 06:18:51.605838   72225 fix.go:114] Sleeping 1 second for extra luck!
	I0115 06:18:52.605932   72225 start.go:125] createHost starting for "" (driver="docker")
	I0115 06:18:52.629272   72225 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0115 06:18:52.629421   72225 start.go:159] libmachine.API.Create for "offline-docker-301000" (driver="docker")
	I0115 06:18:52.629443   72225 client.go:168] LocalClient.Create starting
	I0115 06:18:52.629610   72225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 06:18:52.629687   72225 main.go:141] libmachine: Decoding PEM data...
	I0115 06:18:52.629707   72225 main.go:141] libmachine: Parsing certificate...
	I0115 06:18:52.629770   72225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 06:18:52.629823   72225 main.go:141] libmachine: Decoding PEM data...
	I0115 06:18:52.629835   72225 main.go:141] libmachine: Parsing certificate...
	I0115 06:18:52.630367   72225 cli_runner.go:164] Run: docker network inspect offline-docker-301000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 06:18:52.684422   72225 cli_runner.go:211] docker network inspect offline-docker-301000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 06:18:52.684517   72225 network_create.go:281] running [docker network inspect offline-docker-301000] to gather additional debugging logs...
	I0115 06:18:52.684532   72225 cli_runner.go:164] Run: docker network inspect offline-docker-301000
	W0115 06:18:52.735668   72225 cli_runner.go:211] docker network inspect offline-docker-301000 returned with exit code 1
	I0115 06:18:52.735696   72225 network_create.go:284] error running [docker network inspect offline-docker-301000]: docker network inspect offline-docker-301000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-301000 not found
	I0115 06:18:52.735711   72225 network_create.go:286] output of [docker network inspect offline-docker-301000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-301000 not found
	
	** /stderr **
	I0115 06:18:52.735863   72225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 06:18:52.803122   72225 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:18:52.804582   72225 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:18:52.804936   72225 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022bda30}
	I0115 06:18:52.804948   72225 network_create.go:124] attempt to create docker network offline-docker-301000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0115 06:18:52.805019   72225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-301000 offline-docker-301000
	W0115 06:18:52.856327   72225 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-301000 offline-docker-301000 returned with exit code 1
	W0115 06:18:52.856369   72225 network_create.go:149] failed to create docker network offline-docker-301000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-301000 offline-docker-301000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0115 06:18:52.856385   72225 network_create.go:116] failed to create docker network offline-docker-301000 192.168.67.0/24, will retry: subnet is taken
	I0115 06:18:52.857767   72225 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:18:52.858125   72225 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020ee510}
	I0115 06:18:52.858137   72225 network_create.go:124] attempt to create docker network offline-docker-301000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0115 06:18:52.858205   72225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-301000 offline-docker-301000
	I0115 06:18:52.944706   72225 network_create.go:108] docker network offline-docker-301000 192.168.76.0/24 created
	I0115 06:18:52.944746   72225 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-301000" container
	I0115 06:18:52.944894   72225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 06:18:52.998249   72225 cli_runner.go:164] Run: docker volume create offline-docker-301000 --label name.minikube.sigs.k8s.io=offline-docker-301000 --label created_by.minikube.sigs.k8s.io=true
	I0115 06:18:53.048757   72225 oci.go:103] Successfully created a docker volume offline-docker-301000
	I0115 06:18:53.048897   72225 cli_runner.go:164] Run: docker run --rm --name offline-docker-301000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-301000 --entrypoint /usr/bin/test -v offline-docker-301000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 06:18:53.356418   72225 oci.go:107] Successfully prepared a docker volume offline-docker-301000
	I0115 06:18:53.356472   72225 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 06:18:53.356484   72225 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 06:18:53.356599   72225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-301000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 06:24:52.617693   72225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 06:24:52.617821   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:52.673190   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:52.673307   72225 retry.go:31] will retry after 184.770323ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:52.860438   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:52.911824   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:52.911923   72225 retry.go:31] will retry after 511.501536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:53.423688   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:53.474872   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:53.474972   72225 retry.go:31] will retry after 353.601996ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:53.830074   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:53.884363   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	W0115 06:24:53.884480   72225 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	
	W0115 06:24:53.884506   72225 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:53.884567   72225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 06:24:53.884652   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:53.935133   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:53.935236   72225 retry.go:31] will retry after 265.436168ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:54.202962   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:54.256848   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:54.256949   72225 retry.go:31] will retry after 466.68895ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:54.725996   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:54.780647   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:54.780760   72225 retry.go:31] will retry after 732.23224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:55.513434   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:55.567828   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	W0115 06:24:55.567926   72225 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	
	W0115 06:24:55.567961   72225 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:55.567977   72225 start.go:128] duration metric: createHost completed in 6m2.974375379s
	I0115 06:24:55.568040   72225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 06:24:55.568112   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:55.619332   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:55.619433   72225 retry.go:31] will retry after 232.506422ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:55.853189   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:55.907372   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:55.907482   72225 retry.go:31] will retry after 454.792155ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:56.363073   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:56.415819   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:56.415909   72225 retry.go:31] will retry after 580.083423ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:56.996611   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:57.051810   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	W0115 06:24:57.051910   72225 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	
	W0115 06:24:57.051928   72225 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:57.051994   72225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 06:24:57.052048   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:57.103923   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:57.104018   72225 retry.go:31] will retry after 344.361076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:57.450700   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:57.503413   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:57.503506   72225 retry.go:31] will retry after 295.366774ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:57.799336   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:57.854774   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	I0115 06:24:57.854864   72225 retry.go:31] will retry after 585.936228ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:58.443110   72225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000
	W0115 06:24:58.496886   72225 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000 returned with exit code 1
	W0115 06:24:58.496994   72225 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	
	W0115 06:24:58.497020   72225 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-301000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-301000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000
	I0115 06:24:58.497033   72225 fix.go:56] fixHost completed within 6m26.560789767s
	I0115 06:24:58.497039   72225 start.go:83] releasing machines lock for "offline-docker-301000", held for 6m26.560826808s
	W0115 06:24:58.497113   72225 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-301000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-301000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0115 06:24:58.540610   72225 out.go:177] 
	W0115 06:24:58.562303   72225 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0115 06:24:58.562341   72225 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0115 06:24:58.562357   72225 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0115 06:24:58.583492   72225 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-301000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2024-01-15 06:24:58.659929 -0800 PST m=+4994.761572276
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-301000
helpers_test.go:235: (dbg) docker inspect offline-docker-301000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "offline-docker-301000",
	        "Id": "8ca869fe5d44da08c8e16299a0cd035694d96c15d834808e6216a158dc4e25e1",
	        "Created": "2024-01-15T14:18:52.906991768Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-301000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
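The network inspect above shows the minikube-labelled bridge network on 192.168.76.0/24 (the fallback subnet after the pool-overlap error) with an empty "Containers" map, which is consistent with the repeated "No such container: offline-docker-301000" errors: the network was created but the node container never was. A minimal way to confirm that split on the affected host, assuming only the standard docker CLI and the profile name shown in this report:

	docker network inspect offline-docker-301000 --format '{{json .Containers}}'   # expected: {}
	docker ps -a --filter name=offline-docker-301000                               # expected: no matching container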
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-301000 -n offline-docker-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-301000 -n offline-docker-301000: exit status 7 (109.778251ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 06:24:58.822726   73086 status.go:249] status error: host: state: unknown state "offline-docker-301000": docker container inspect offline-docker-301000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-301000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-301000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-301000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-301000
--- FAIL: TestOffline (756.86s)
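The failure chain above: the first create attempt on 192.168.67.0/24 was rejected with "Pool overlaps with other one on this address space", the retry on 192.168.76.0/24 created the network, but the node container itself never appeared, so createHost timed out after 360 seconds and the run exited with DRV_CREATE_TIMEOUT. A cleanup sketch along the lines of the report's own suggestion, using standard docker/minikube commands and the profile name taken from the log:

	# list every network and its subnet to see what already claims 192.168.67.0/24
	docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	# the suggested recovery, plus removal of the leftover minikube network if it is still present
	minikube delete -p offline-docker-301000
	docker network rm offline-docker-301000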

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (257.71s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-482000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0115 05:14:54.017946   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:15:21.726027   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:15:56.754375   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:56.760658   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:56.771965   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:56.794195   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:56.835561   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:56.917803   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:57.080000   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:57.401068   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:58.041414   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:15:59.322232   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:16:01.883936   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:16:07.003890   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:16:17.243837   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:16:37.723699   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:17:18.682221   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-482000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m17.666617229s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-482000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-482000 in cluster ingress-addon-legacy-482000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:13:17.523696   68490 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:13:17.523896   68490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:13:17.523901   68490 out.go:309] Setting ErrFile to fd 2...
	I0115 05:13:17.523905   68490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:13:17.524085   68490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:13:17.525569   68490 out.go:303] Setting JSON to false
	I0115 05:13:17.547961   68490 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":31140,"bootTime":1705293257,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:13:17.548117   68490 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:13:17.570030   68490 out.go:177] * [ingress-addon-legacy-482000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:13:17.613776   68490 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 05:13:17.613831   68490 notify.go:220] Checking for updates...
	I0115 05:13:17.636003   68490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:13:17.657817   68490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:13:17.679522   68490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:13:17.700758   68490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 05:13:17.722801   68490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 05:13:17.745024   68490 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:13:17.801556   68490 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:13:17.801719   68490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:13:17.906237   68490 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-15 13:13:17.896795038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:13:17.927993   68490 out.go:177] * Using the docker driver based on user configuration
	I0115 05:13:17.970601   68490 start.go:298] selected driver: docker
	I0115 05:13:17.970624   68490 start.go:902] validating driver "docker" against <nil>
	I0115 05:13:17.970637   68490 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 05:13:17.974535   68490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:13:18.078475   68490 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-15 13:13:18.070483754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:13:18.078657   68490 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 05:13:18.078844   68490 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 05:13:18.100712   68490 out.go:177] * Using Docker Desktop driver with root privileges
	I0115 05:13:18.121883   68490 cni.go:84] Creating CNI manager for ""
	I0115 05:13:18.121922   68490 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0115 05:13:18.121942   68490 start_flags.go:321] config:
	{Name:ingress-addon-legacy-482000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-482000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:13:18.165694   68490 out.go:177] * Starting control plane node ingress-addon-legacy-482000 in cluster ingress-addon-legacy-482000
	I0115 05:13:18.187576   68490 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 05:13:18.230702   68490 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 05:13:18.252655   68490 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0115 05:13:18.252715   68490 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 05:13:18.306994   68490 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 05:13:18.307017   68490 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 05:13:18.316110   68490 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0115 05:13:18.316130   68490 cache.go:56] Caching tarball of preloaded images
	I0115 05:13:18.316383   68490 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0115 05:13:18.337593   68490 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0115 05:13:18.379675   68490 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:13:18.461516   68490 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0115 05:13:21.376100   68490 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:13:21.376286   68490 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:13:22.007348   68490 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0115 05:13:22.007602   68490 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/config.json ...
	I0115 05:13:22.007628   68490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/config.json: {Name:mk25d800f452f25dcdbf3f13f03dc1f97733d386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:22.007921   68490 cache.go:194] Successfully downloaded all kic artifacts
	I0115 05:13:22.007951   68490 start.go:365] acquiring machines lock for ingress-addon-legacy-482000: {Name:mk0930bd0b28507c16b010a48d1c10ae87dd3dc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 05:13:22.008057   68490 start.go:369] acquired machines lock for "ingress-addon-legacy-482000" in 98.221µs
	I0115 05:13:22.008078   68490 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-482000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-482000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0115 05:13:22.008118   68490 start.go:125] createHost starting for "" (driver="docker")
	I0115 05:13:22.059365   68490 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0115 05:13:22.059682   68490 start.go:159] libmachine.API.Create for "ingress-addon-legacy-482000" (driver="docker")
	I0115 05:13:22.059733   68490 client.go:168] LocalClient.Create starting
	I0115 05:13:22.059926   68490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 05:13:22.059997   68490 main.go:141] libmachine: Decoding PEM data...
	I0115 05:13:22.060020   68490 main.go:141] libmachine: Parsing certificate...
	I0115 05:13:22.060081   68490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 05:13:22.060132   68490 main.go:141] libmachine: Decoding PEM data...
	I0115 05:13:22.060151   68490 main.go:141] libmachine: Parsing certificate...
	I0115 05:13:22.060804   68490 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-482000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 05:13:22.112543   68490 cli_runner.go:211] docker network inspect ingress-addon-legacy-482000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 05:13:22.112661   68490 network_create.go:281] running [docker network inspect ingress-addon-legacy-482000] to gather additional debugging logs...
	I0115 05:13:22.112679   68490 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-482000
	W0115 05:13:22.163159   68490 cli_runner.go:211] docker network inspect ingress-addon-legacy-482000 returned with exit code 1
	I0115 05:13:22.163195   68490 network_create.go:284] error running [docker network inspect ingress-addon-legacy-482000]: docker network inspect ingress-addon-legacy-482000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-482000 not found
	I0115 05:13:22.163212   68490 network_create.go:286] output of [docker network inspect ingress-addon-legacy-482000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-482000 not found
	
	** /stderr **
	I0115 05:13:22.163351   68490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:13:22.214750   68490 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013230}
	I0115 05:13:22.214797   68490 network_create.go:124] attempt to create docker network ingress-addon-legacy-482000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0115 05:13:22.214870   68490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-482000 ingress-addon-legacy-482000
	I0115 05:13:22.302368   68490 network_create.go:108] docker network ingress-addon-legacy-482000 192.168.49.0/24 created
	I0115 05:13:22.302417   68490 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-482000" container
	I0115 05:13:22.302558   68490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 05:13:22.356821   68490 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-482000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-482000 --label created_by.minikube.sigs.k8s.io=true
	I0115 05:13:22.409370   68490 oci.go:103] Successfully created a docker volume ingress-addon-legacy-482000
	I0115 05:13:22.409526   68490 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-482000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-482000 --entrypoint /usr/bin/test -v ingress-addon-legacy-482000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 05:13:22.809037   68490 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-482000
	I0115 05:13:22.809074   68490 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0115 05:13:22.809089   68490 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 05:13:22.809204   68490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-482000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 05:13:25.373614   68490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-482000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.56448383s)
	I0115 05:13:25.373648   68490 kic.go:203] duration metric: took 2.564699 seconds to extract preloaded images to volume
	I0115 05:13:25.373761   68490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 05:13:25.476310   68490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-482000 --name ingress-addon-legacy-482000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-482000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-482000 --network ingress-addon-legacy-482000 --ip 192.168.49.2 --volume ingress-addon-legacy-482000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 05:13:25.746892   68490 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Running}}
	I0115 05:13:25.804901   68490 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Status}}
	I0115 05:13:25.865308   68490 cli_runner.go:164] Run: docker exec ingress-addon-legacy-482000 stat /var/lib/dpkg/alternatives/iptables
	I0115 05:13:25.999636   68490 oci.go:144] the created container "ingress-addon-legacy-482000" has a running status.
	I0115 05:13:25.999685   68490 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa...
	I0115 05:13:26.298580   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 05:13:26.298644   68490 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 05:13:26.364308   68490 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Status}}
	I0115 05:13:26.418678   68490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 05:13:26.418702   68490 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-482000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 05:13:26.515333   68490 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Status}}
	I0115 05:13:26.567606   68490 machine.go:88] provisioning docker machine ...
	I0115 05:13:26.567670   68490 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-482000"
	I0115 05:13:26.567812   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:26.622123   68490 main.go:141] libmachine: Using SSH client type: native
	I0115 05:13:26.622486   68490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14075e0] 0x140a2c0 <nil>  [] 0s} 127.0.0.1 54699 <nil> <nil>}
	I0115 05:13:26.622503   68490 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-482000 && echo "ingress-addon-legacy-482000" | sudo tee /etc/hostname
	I0115 05:13:26.768147   68490 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-482000
	
	I0115 05:13:26.768244   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:26.820241   68490 main.go:141] libmachine: Using SSH client type: native
	I0115 05:13:26.820555   68490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14075e0] 0x140a2c0 <nil>  [] 0s} 127.0.0.1 54699 <nil> <nil>}
	I0115 05:13:26.820576   68490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-482000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-482000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-482000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 05:13:26.957781   68490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
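Throughout this provisioning phase the log first resolves the host port Docker mapped to the container's 22/tcp and then drives every step over SSH to 127.0.0.1:<port> (54699 here). A small Go sketch of that lookup, assuming only the docker CLI and a running container named as in the log:

package main

import (
	"fmt"
	"log"
	"net"
	"os/exec"
	"strings"
	"time"
)

func main() {
	name := "ingress-addon-legacy-482000" // container name from the log
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	// Same template the repeated "docker container inspect -f ..." calls above use.
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		log.Fatalf("inspect failed: %v", err)
	}
	port := strings.TrimSpace(string(out))

	// The provisioner then opens an SSH session to 127.0.0.1:<port>;
	// here we only verify the forwarded port is reachable.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:"+port, 5*time.Second)
	if err != nil {
		log.Fatalf("cannot reach forwarded SSH port %s: %v", port, err)
	}
	conn.Close()
	fmt.Println("SSH forwarded to local port", port)
}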
	I0115 05:13:26.957803   68490 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17953-64881/.minikube CaCertPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17953-64881/.minikube}
	I0115 05:13:26.957826   68490 ubuntu.go:177] setting up certificates
	I0115 05:13:26.957845   68490 provision.go:83] configureAuth start
	I0115 05:13:26.957926   68490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-482000
	I0115 05:13:27.008737   68490 provision.go:138] copyHostCerts
	I0115 05:13:27.008804   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.pem
	I0115 05:13:27.008855   68490 exec_runner.go:144] found /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.pem, removing ...
	I0115 05:13:27.008863   68490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.pem
	I0115 05:13:27.008971   68490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.pem (1082 bytes)
	I0115 05:13:27.009151   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cert.pem
	I0115 05:13:27.009188   68490 exec_runner.go:144] found /Users/jenkins/minikube-integration/17953-64881/.minikube/cert.pem, removing ...
	I0115 05:13:27.009193   68490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17953-64881/.minikube/cert.pem
	I0115 05:13:27.009277   68490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17953-64881/.minikube/cert.pem (1123 bytes)
	I0115 05:13:27.009424   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17953-64881/.minikube/key.pem
	I0115 05:13:27.009461   68490 exec_runner.go:144] found /Users/jenkins/minikube-integration/17953-64881/.minikube/key.pem, removing ...
	I0115 05:13:27.009472   68490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17953-64881/.minikube/key.pem
	I0115 05:13:27.009547   68490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17953-64881/.minikube/key.pem (1675 bytes)
	I0115 05:13:27.009689   68490 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-482000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-482000]
	I0115 05:13:27.129693   68490 provision.go:172] copyRemoteCerts
	I0115 05:13:27.129750   68490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 05:13:27.129804   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:27.181589   68490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:13:27.275665   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 05:13:27.275743   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 05:13:27.295400   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 05:13:27.295467   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 05:13:27.315293   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 05:13:27.315374   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 05:13:27.335495   68490 provision.go:86] duration metric: configureAuth took 377.654996ms
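configureAuth above generates a server certificate whose SANs cover the node IP, localhost and the profile name. A rough Go sketch of that kind of certificate generation; unlike minikube it self-signs rather than signing with the minikube CA, purely to keep the example short, and the organization and DNS names are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.example-profile"}}, // placeholder org
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs analogous to the san=[...] list logged by provision.go above.
		DNSNames:    []string{"localhost", "minikube", "example-profile"},
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}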
	I0115 05:13:27.335513   68490 ubuntu.go:193] setting minikube options for container-runtime
	I0115 05:13:27.335655   68490 config.go:182] Loaded profile config "ingress-addon-legacy-482000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0115 05:13:27.335724   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:27.387241   68490 main.go:141] libmachine: Using SSH client type: native
	I0115 05:13:27.387597   68490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14075e0] 0x140a2c0 <nil>  [] 0s} 127.0.0.1 54699 <nil> <nil>}
	I0115 05:13:27.387613   68490 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0115 05:13:27.521855   68490 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0115 05:13:27.521873   68490 ubuntu.go:71] root file system type: overlay
	I0115 05:13:27.521960   68490 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0115 05:13:27.522043   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:27.574157   68490 main.go:141] libmachine: Using SSH client type: native
	I0115 05:13:27.574495   68490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14075e0] 0x140a2c0 <nil>  [] 0s} 127.0.0.1 54699 <nil> <nil>}
	I0115 05:13:27.574552   68490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0115 05:13:27.717276   68490 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0115 05:13:27.717388   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:27.769354   68490 main.go:141] libmachine: Using SSH client type: native
	I0115 05:13:27.769666   68490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14075e0] 0x140a2c0 <nil>  [] 0s} 127.0.0.1 54699 <nil> <nil>}
	I0115 05:13:27.769679   68490 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0115 05:13:28.349239   68490 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-15 13:13:27.714916959 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0115 05:13:28.349263   68490 machine.go:91] provisioned docker machine in 1.781711736s
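The diff above shows why the generated unit first clears ExecStart= and then sets its own dockerd command line, and why docker is only restarted when the unit actually changed. A Go sketch of that compare-then-swap-then-restart pattern (paths as in the log; assumes it runs as root inside the node):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	current, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
	generated, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(current, generated) {
		return // unit unchanged, leave the running daemon untouched
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		log.Fatal(err)
	}
	// Mirrors "systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker".
	for _, args := range [][]string{
		{"daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}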
	I0115 05:13:28.349270   68490 client.go:171] LocalClient.Create took 6.289873831s
	I0115 05:13:28.349318   68490 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-482000" took 6.289982776s
	I0115 05:13:28.349335   68490 start.go:300] post-start starting for "ingress-addon-legacy-482000" (driver="docker")
	I0115 05:13:28.349359   68490 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 05:13:28.349441   68490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 05:13:28.349565   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:28.404401   68490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:13:28.499611   68490 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 05:13:28.503660   68490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 05:13:28.503684   68490 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 05:13:28.503691   68490 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 05:13:28.503699   68490 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 05:13:28.503709   68490 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17953-64881/.minikube/addons for local assets ...
	I0115 05:13:28.503818   68490 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17953-64881/.minikube/files for local assets ...
	I0115 05:13:28.504003   68490 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/ssl/certs/656302.pem -> 656302.pem in /etc/ssl/certs
	I0115 05:13:28.504009   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/ssl/certs/656302.pem -> /etc/ssl/certs/656302.pem
	I0115 05:13:28.504217   68490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 05:13:28.512139   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/ssl/certs/656302.pem --> /etc/ssl/certs/656302.pem (1708 bytes)
	I0115 05:13:28.532653   68490 start.go:303] post-start completed in 183.315151ms
	I0115 05:13:28.533237   68490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-482000
	I0115 05:13:28.586073   68490 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/config.json ...
	I0115 05:13:28.586563   68490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:13:28.586629   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:28.638438   68490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:13:28.731024   68490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:13:28.735739   68490 start.go:128] duration metric: createHost completed in 6.727974982s
	I0115 05:13:28.735758   68490 start.go:83] releasing machines lock for "ingress-addon-legacy-482000", held for 6.728063648s
	I0115 05:13:28.735851   68490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-482000
	I0115 05:13:28.786941   68490 ssh_runner.go:195] Run: cat /version.json
	I0115 05:13:28.786955   68490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 05:13:28.787019   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:28.787033   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:28.848291   68490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:13:28.848265   68490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:13:29.049864   68490 ssh_runner.go:195] Run: systemctl --version
	I0115 05:13:29.054708   68490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 05:13:29.059652   68490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0115 05:13:29.081190   68490 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0115 05:13:29.081282   68490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0115 05:13:29.096252   68490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0115 05:13:29.110975   68490 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
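The find/sed passes above normalize the CNI configs already present in the node image: the loopback config gets a name and cniVersion 1.0.0, and any bridge/podman configs are pinned to the 10.244.0.0/16 pod subnet. A Go sketch of the loopback part using encoding/json instead of sed; the file path is a placeholder for /etc/cni/net.d/*loopback.conf*:

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	path := "loopback.conf" // placeholder for the file patched under /etc/cni/net.d
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}
	conf["cniVersion"] = "1.0.0" // same rewrite the sed expression above performs
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // inserted only when the name field is missing
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}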
	I0115 05:13:29.110994   68490 start.go:475] detecting cgroup driver to use...
	I0115 05:13:29.111009   68490 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 05:13:29.111127   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 05:13:29.125845   68490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0115 05:13:29.135228   68490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 05:13:29.144625   68490 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 05:13:29.144686   68490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 05:13:29.153815   68490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 05:13:29.163223   68490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 05:13:29.172581   68490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 05:13:29.181909   68490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 05:13:29.190860   68490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0115 05:13:29.199976   68490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 05:13:29.208080   68490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 05:13:29.216167   68490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 05:13:29.265335   68490 ssh_runner.go:195] Run: sudo systemctl restart containerd
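The sed edits above switch containerd to the cgroupfs driver (and fix the sandbox image and runtime names) before restarting it. A Go sketch of the SystemdCgroup rewrite; the config path is a placeholder for /etc/containerd/config.toml:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "config.toml" // placeholder
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		log.Fatal(err)
	}
}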
	I0115 05:13:29.341241   68490 start.go:475] detecting cgroup driver to use...
	I0115 05:13:29.341278   68490 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 05:13:29.341355   68490 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0115 05:13:29.353715   68490 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0115 05:13:29.353790   68490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 05:13:29.365211   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 05:13:29.382257   68490 ssh_runner.go:195] Run: which cri-dockerd
	I0115 05:13:29.386950   68490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0115 05:13:29.396739   68490 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0115 05:13:29.415157   68490 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0115 05:13:29.508344   68490 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0115 05:13:29.595964   68490 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0115 05:13:29.596114   68490 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0115 05:13:29.613130   68490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 05:13:29.668060   68490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0115 05:13:29.987672   68490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0115 05:13:30.011499   68490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0115 05:13:30.077738   68490 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0115 05:13:30.077894   68490 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-482000 dig +short host.docker.internal
	I0115 05:13:30.191378   68490 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0115 05:13:30.191494   68490 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0115 05:13:30.196036   68490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 05:13:30.206585   68490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:13:30.259522   68490 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0115 05:13:30.259601   68490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0115 05:13:30.277922   68490 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0115 05:13:30.277938   68490 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0115 05:13:30.278012   68490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0115 05:13:30.286809   68490 ssh_runner.go:195] Run: which lz4
	I0115 05:13:30.291092   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0115 05:13:30.291209   68490 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 05:13:30.295173   68490 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 05:13:30.295196   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0115 05:13:35.860514   68490 docker.go:649] Took 5.569658 seconds to copy over tarball
	I0115 05:13:35.860596   68490 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 05:13:37.512914   68490 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.652345447s)
	I0115 05:13:37.512948   68490 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 05:13:37.558894   68490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0115 05:13:37.567647   68490 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0115 05:13:37.582829   68490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 05:13:37.633169   68490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0115 05:13:38.699392   68490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.06626346s)
	I0115 05:13:38.699496   68490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0115 05:13:38.717697   68490 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0115 05:13:38.717714   68490 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0115 05:13:38.717725   68490 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 05:13:38.723231   68490 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 05:13:38.724730   68490 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 05:13:38.724821   68490 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 05:13:38.725231   68490 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0115 05:13:38.725334   68490 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 05:13:38.725383   68490 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 05:13:38.725476   68490 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0115 05:13:38.725520   68490 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0115 05:13:38.733050   68490 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 05:13:38.733499   68490 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0115 05:13:38.735375   68490 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 05:13:38.735377   68490 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 05:13:38.735401   68490 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0115 05:13:38.735504   68490 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 05:13:38.735586   68490 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0115 05:13:38.735636   68490 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 05:13:39.175786   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0115 05:13:39.183284   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0115 05:13:39.199822   68490 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0115 05:13:39.199933   68490 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0115 05:13:39.200041   68490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0115 05:13:39.206903   68490 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0115 05:13:39.206963   68490 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 05:13:39.207056   68490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0115 05:13:39.223298   68490 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0115 05:13:39.228994   68490 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0115 05:13:39.237719   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 05:13:39.256010   68490 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0115 05:13:39.256047   68490 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 05:13:39.256107   68490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 05:13:39.275869   68490 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0115 05:13:39.312429   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0115 05:13:39.331833   68490 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0115 05:13:39.331863   68490 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0115 05:13:39.331937   68490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0115 05:13:39.350209   68490 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0115 05:13:39.366916   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0115 05:13:39.386076   68490 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0115 05:13:39.386100   68490 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0115 05:13:39.386164   68490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0115 05:13:39.403557   68490 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0115 05:13:39.436460   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0115 05:13:39.455940   68490 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0115 05:13:39.455966   68490 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 05:13:39.456035   68490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0115 05:13:39.476163   68490 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0115 05:13:39.510542   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 05:13:39.525063   68490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0115 05:13:39.546194   68490 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0115 05:13:39.546222   68490 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 05:13:39.546291   68490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0115 05:13:39.564001   68490 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0115 05:13:39.564046   68490 cache_images.go:92] LoadImages completed in 846.355343ms
	W0115 05:13:39.564097   68490 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
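The "needs transfer" decisions above come from probing the runtime with docker image inspect: a non-zero exit means the tag is absent, so any stale tag is removed and the image would be loaded from the local cache (which is missing here, hence the warning). A Go sketch of that existence probe:

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether the Docker daemon already has the given tag.
func imagePresent(tag string) bool {
	cmd := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag)
	return cmd.Run() == nil // non-zero exit => image not found
}

func main() {
	for _, tag := range []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/kube-apiserver:v1.18.20",
	} {
		if imagePresent(tag) {
			fmt.Println(tag, "already present")
		} else {
			fmt.Println(tag, "needs transfer from the preload/cache")
		}
	}
}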
	I0115 05:13:39.564170   68490 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0115 05:13:39.611506   68490 cni.go:84] Creating CNI manager for ""
	I0115 05:13:39.611524   68490 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0115 05:13:39.611539   68490 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 05:13:39.611555   68490 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-482000 NodeName:ingress-addon-legacy-482000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 05:13:39.611656   68490 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-482000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 05:13:39.611743   68490 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-482000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-482000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
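The kubeadm and kubelet configuration above is rendered from Go templates filled with the options listed at kubeadm.go:176. A much-reduced text/template sketch of the idea (not the template minikube actually ships):

package main

import (
	"os"
	"text/template"
)

// tmpl is a deliberately tiny stand-in for minikube's ClusterConfiguration template.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlane}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, map[string]string{
		"ControlPlane":  "control-plane.minikube.internal",
		"Port":          "8443",
		"Version":       "v1.18.20",
		"PodSubnet":     "10.244.0.0/16",
		"ServiceSubnet": "10.96.0.0/12",
	})
}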
	I0115 05:13:39.611822   68490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0115 05:13:39.620476   68490 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 05:13:39.620547   68490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 05:13:39.628698   68490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0115 05:13:39.644176   68490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0115 05:13:39.660198   68490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0115 05:13:39.676072   68490 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 05:13:39.680437   68490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 05:13:39.690632   68490 certs.go:56] Setting up /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000 for IP: 192.168.49.2
	I0115 05:13:39.690651   68490 certs.go:190] acquiring lock for shared ca certs: {Name:mk6c13ac30e7b90f60feff8c8c9de7894c05f68c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:39.690869   68490 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.key
	I0115 05:13:39.690952   68490 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17953-64881/.minikube/proxy-client-ca.key
	I0115 05:13:39.691004   68490 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/client.key
	I0115 05:13:39.691021   68490 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/client.crt with IP's: []
	I0115 05:13:39.791494   68490 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/client.crt ...
	I0115 05:13:39.791507   68490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/client.crt: {Name:mkec7f250db9d306b935715f1b6906d417032968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:39.791867   68490 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/client.key ...
	I0115 05:13:39.791877   68490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/client.key: {Name:mk10e8cf240ecb2db2c2a535cdc22006e4c68b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:39.792147   68490 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.key.dd3b5fb2
	I0115 05:13:39.792163   68490 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 05:13:39.886100   68490 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.crt.dd3b5fb2 ...
	I0115 05:13:39.886109   68490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.crt.dd3b5fb2: {Name:mk503b842c6fe81bcaa07da5a286cac0cd30d6c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:39.886351   68490 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.key.dd3b5fb2 ...
	I0115 05:13:39.886359   68490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.key.dd3b5fb2: {Name:mkcdd6a7dab40037ce525f2f116e7b974f5425b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:39.886562   68490 certs.go:337] copying /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.crt
	I0115 05:13:39.886739   68490 certs.go:341] copying /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.key
	I0115 05:13:39.886922   68490 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.key
	I0115 05:13:39.886935   68490 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.crt with IP's: []
	I0115 05:13:39.955402   68490 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.crt ...
	I0115 05:13:39.955411   68490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.crt: {Name:mk15271420b452085c597a55df0aee8291d34b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:39.955639   68490 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.key ...
	I0115 05:13:39.955648   68490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.key: {Name:mkdb948c00b256404f2164e7a349d9b705989283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:13:39.955845   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 05:13:39.955870   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 05:13:39.955891   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 05:13:39.955933   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 05:13:39.955951   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 05:13:39.955969   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 05:13:39.955987   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 05:13:39.956003   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 05:13:39.956111   68490 certs.go:437] found cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/65630.pem (1338 bytes)
	W0115 05:13:39.956157   68490 certs.go:433] ignoring /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/65630_empty.pem, impossibly tiny 0 bytes
	I0115 05:13:39.956166   68490 certs.go:437] found cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 05:13:39.956197   68490 certs.go:437] found cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem (1082 bytes)
	I0115 05:13:39.956231   68490 certs.go:437] found cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem (1123 bytes)
	I0115 05:13:39.956263   68490 certs.go:437] found cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/Users/jenkins/minikube-integration/17953-64881/.minikube/certs/key.pem (1675 bytes)
	I0115 05:13:39.956331   68490 certs.go:437] found cert: /Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/ssl/certs/656302.pem (1708 bytes)
	I0115 05:13:39.956364   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 05:13:39.956384   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/65630.pem -> /usr/share/ca-certificates/65630.pem
	I0115 05:13:39.956401   68490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/ssl/certs/656302.pem -> /usr/share/ca-certificates/656302.pem
	I0115 05:13:39.956848   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 05:13:39.978775   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 05:13:39.999345   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 05:13:40.020320   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/ingress-addon-legacy-482000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 05:13:40.041114   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 05:13:40.062075   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 05:13:40.082643   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 05:13:40.103303   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 05:13:40.124503   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 05:13:40.145143   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/65630.pem --> /usr/share/ca-certificates/65630.pem (1338 bytes)
	I0115 05:13:40.166357   68490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/ssl/certs/656302.pem --> /usr/share/ca-certificates/656302.pem (1708 bytes)
	I0115 05:13:40.187022   68490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 05:13:40.202603   68490 ssh_runner.go:195] Run: openssl version
	I0115 05:13:40.208489   68490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 05:13:40.217600   68490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 05:13:40.221987   68490 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0115 05:13:40.222048   68490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 05:13:40.228432   68490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 05:13:40.237532   68490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65630.pem && ln -fs /usr/share/ca-certificates/65630.pem /etc/ssl/certs/65630.pem"
	I0115 05:13:40.246512   68490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65630.pem
	I0115 05:13:40.250910   68490 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 13:07 /usr/share/ca-certificates/65630.pem
	I0115 05:13:40.250959   68490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65630.pem
	I0115 05:13:40.257873   68490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65630.pem /etc/ssl/certs/51391683.0"
	I0115 05:13:40.267015   68490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/656302.pem && ln -fs /usr/share/ca-certificates/656302.pem /etc/ssl/certs/656302.pem"
	I0115 05:13:40.276377   68490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/656302.pem
	I0115 05:13:40.280364   68490 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 13:07 /usr/share/ca-certificates/656302.pem
	I0115 05:13:40.280408   68490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/656302.pem
	I0115 05:13:40.286843   68490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/656302.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 05:13:40.296126   68490 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 05:13:40.300340   68490 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 05:13:40.300386   68490 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-482000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-482000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:13:40.300482   68490 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0115 05:13:40.319453   68490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 05:13:40.328220   68490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 05:13:40.336415   68490 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 05:13:40.336477   68490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 05:13:40.344784   68490 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 05:13:40.344856   68490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 05:13:40.398559   68490 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0115 05:13:40.398649   68490 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 05:13:40.643731   68490 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 05:13:40.643810   68490 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 05:13:40.643882   68490 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 05:13:40.809008   68490 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 05:13:40.809772   68490 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 05:13:40.809806   68490 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 05:13:40.878341   68490 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 05:13:40.923678   68490 out.go:204]   - Generating certificates and keys ...
	I0115 05:13:40.923749   68490 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 05:13:40.923844   68490 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 05:13:41.227800   68490 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 05:13:41.350653   68490 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 05:13:41.397161   68490 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 05:13:41.554434   68490 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 05:13:41.661300   68490 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 05:13:41.661505   68490 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-482000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 05:13:41.742690   68490 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 05:13:41.742869   68490 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-482000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 05:13:41.941844   68490 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 05:13:42.098476   68490 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 05:13:42.240757   68490 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 05:13:42.240814   68490 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 05:13:42.538308   68490 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 05:13:42.686341   68490 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 05:13:42.785000   68490 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 05:13:42.902787   68490 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 05:13:42.903345   68490 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 05:13:42.924950   68490 out.go:204]   - Booting up control plane ...
	I0115 05:13:42.925057   68490 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 05:13:42.925162   68490 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 05:13:42.925266   68490 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 05:13:42.925363   68490 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 05:13:42.925533   68490 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 05:14:22.911004   68490 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0115 05:14:22.912142   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:14:22.912367   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:14:27.913195   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:14:27.913422   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:14:37.914806   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:14:37.915016   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:14:57.915231   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:14:57.915546   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:15:37.915365   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:15:37.915595   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:15:37.915620   68490 kubeadm.go:322] 
	I0115 05:15:37.915666   68490 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0115 05:15:37.915709   68490 kubeadm.go:322] 		timed out waiting for the condition
	I0115 05:15:37.915717   68490 kubeadm.go:322] 
	I0115 05:15:37.915766   68490 kubeadm.go:322] 	This error is likely caused by:
	I0115 05:15:37.915815   68490 kubeadm.go:322] 		- The kubelet is not running
	I0115 05:15:37.915937   68490 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0115 05:15:37.915947   68490 kubeadm.go:322] 
	I0115 05:15:37.916079   68490 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0115 05:15:37.916135   68490 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0115 05:15:37.916187   68490 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0115 05:15:37.916198   68490 kubeadm.go:322] 
	I0115 05:15:37.916373   68490 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0115 05:15:37.916485   68490 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0115 05:15:37.916498   68490 kubeadm.go:322] 
	I0115 05:15:37.916608   68490 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0115 05:15:37.916681   68490 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0115 05:15:37.916763   68490 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0115 05:15:37.916794   68490 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0115 05:15:37.916805   68490 kubeadm.go:322] 
	I0115 05:15:37.917977   68490 kubeadm.go:322] W0115 13:13:40.398594    1693 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0115 05:15:37.918133   68490 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0115 05:15:37.918224   68490 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0115 05:15:37.918370   68490 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0115 05:15:37.918474   68490 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 05:15:37.918582   68490 kubeadm.go:322] W0115 13:13:42.907987    1693 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 05:15:37.918686   68490 kubeadm.go:322] W0115 13:13:42.908829    1693 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 05:15:37.918752   68490 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0115 05:15:37.918823   68490 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0115 05:15:37.918923   68490 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-482000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-482000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0115 13:13:40.398594    1693 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0115 13:13:42.907987    1693 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0115 13:13:42.908829    1693 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0115 05:15:37.918958   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0115 05:15:38.324392   68490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 05:15:38.334882   68490 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 05:15:38.334946   68490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 05:15:38.343146   68490 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 05:15:38.343180   68490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 05:15:38.394995   68490 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0115 05:15:38.395049   68490 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 05:15:38.623298   68490 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 05:15:38.623396   68490 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 05:15:38.623488   68490 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 05:15:38.794051   68490 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 05:15:38.794621   68490 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 05:15:38.794662   68490 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 05:15:38.866070   68490 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 05:15:38.888089   68490 out.go:204]   - Generating certificates and keys ...
	I0115 05:15:38.888163   68490 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 05:15:38.888244   68490 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 05:15:38.888327   68490 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0115 05:15:38.888385   68490 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0115 05:15:38.888441   68490 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0115 05:15:38.888482   68490 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0115 05:15:38.888543   68490 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0115 05:15:38.888610   68490 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0115 05:15:38.888715   68490 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0115 05:15:38.888782   68490 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0115 05:15:38.888811   68490 kubeadm.go:322] [certs] Using the existing "sa" key
	I0115 05:15:38.888868   68490 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 05:15:39.070503   68490 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 05:15:39.221395   68490 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 05:15:39.431710   68490 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 05:15:39.554212   68490 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 05:15:39.554837   68490 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 05:15:39.576890   68490 out.go:204]   - Booting up control plane ...
	I0115 05:15:39.577041   68490 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 05:15:39.577175   68490 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 05:15:39.577301   68490 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 05:15:39.577426   68490 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 05:15:39.577687   68490 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 05:16:19.561445   68490 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0115 05:16:19.562612   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:16:19.562788   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:16:24.563577   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:16:24.563785   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:16:34.564459   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:16:34.564681   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:16:54.564402   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:16:54.564623   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:17:34.563880   68490 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0115 05:17:34.564112   68490 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0115 05:17:34.564128   68490 kubeadm.go:322] 
	I0115 05:17:34.564193   68490 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0115 05:17:34.564260   68490 kubeadm.go:322] 		timed out waiting for the condition
	I0115 05:17:34.564274   68490 kubeadm.go:322] 
	I0115 05:17:34.564310   68490 kubeadm.go:322] 	This error is likely caused by:
	I0115 05:17:34.564347   68490 kubeadm.go:322] 		- The kubelet is not running
	I0115 05:17:34.564456   68490 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0115 05:17:34.564465   68490 kubeadm.go:322] 
	I0115 05:17:34.564588   68490 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0115 05:17:34.564644   68490 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0115 05:17:34.564686   68490 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0115 05:17:34.564695   68490 kubeadm.go:322] 
	I0115 05:17:34.564854   68490 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0115 05:17:34.564992   68490 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0115 05:17:34.565009   68490 kubeadm.go:322] 
	I0115 05:17:34.565124   68490 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0115 05:17:34.565186   68490 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0115 05:17:34.565293   68490 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0115 05:17:34.565335   68490 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0115 05:17:34.565352   68490 kubeadm.go:322] 
	I0115 05:17:34.566660   68490 kubeadm.go:322] W0115 13:15:38.395198    4738 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0115 05:17:34.566850   68490 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0115 05:17:34.566914   68490 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0115 05:17:34.567022   68490 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0115 05:17:34.567111   68490 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 05:17:34.567211   68490 kubeadm.go:322] W0115 13:15:39.560179    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 05:17:34.567318   68490 kubeadm.go:322] W0115 13:15:39.560851    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 05:17:34.567385   68490 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0115 05:17:34.567447   68490 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0115 05:17:34.567485   68490 kubeadm.go:406] StartCluster complete in 3m54.279969776s
	I0115 05:17:34.567569   68490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0115 05:17:34.585198   68490 logs.go:284] 0 containers: []
	W0115 05:17:34.585215   68490 logs.go:286] No container was found matching "kube-apiserver"
	I0115 05:17:34.585284   68490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0115 05:17:34.604638   68490 logs.go:284] 0 containers: []
	W0115 05:17:34.604653   68490 logs.go:286] No container was found matching "etcd"
	I0115 05:17:34.604722   68490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0115 05:17:34.621896   68490 logs.go:284] 0 containers: []
	W0115 05:17:34.621911   68490 logs.go:286] No container was found matching "coredns"
	I0115 05:17:34.621979   68490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0115 05:17:34.639827   68490 logs.go:284] 0 containers: []
	W0115 05:17:34.639840   68490 logs.go:286] No container was found matching "kube-scheduler"
	I0115 05:17:34.639905   68490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0115 05:17:34.658517   68490 logs.go:284] 0 containers: []
	W0115 05:17:34.658531   68490 logs.go:286] No container was found matching "kube-proxy"
	I0115 05:17:34.658598   68490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0115 05:17:34.677607   68490 logs.go:284] 0 containers: []
	W0115 05:17:34.677621   68490 logs.go:286] No container was found matching "kube-controller-manager"
	I0115 05:17:34.677689   68490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0115 05:17:34.696195   68490 logs.go:284] 0 containers: []
	W0115 05:17:34.696211   68490 logs.go:286] No container was found matching "kindnet"
	I0115 05:17:34.696219   68490 logs.go:123] Gathering logs for kubelet ...
	I0115 05:17:34.696227   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 05:17:34.731785   68490 logs.go:123] Gathering logs for dmesg ...
	I0115 05:17:34.731805   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 05:17:34.744197   68490 logs.go:123] Gathering logs for describe nodes ...
	I0115 05:17:34.744211   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0115 05:17:34.817098   68490 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0115 05:17:34.817117   68490 logs.go:123] Gathering logs for Docker ...
	I0115 05:17:34.817124   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0115 05:17:34.832330   68490 logs.go:123] Gathering logs for container status ...
	I0115 05:17:34.832346   68490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0115 05:17:34.908173   68490 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0115 13:15:38.395198    4738 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0115 13:15:39.560179    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0115 13:15:39.560851    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0115 05:17:34.908196   68490 out.go:239] * 
	W0115 05:17:34.908241   68490 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0115 13:15:38.395198    4738 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0115 13:15:39.560179    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0115 13:15:39.560851    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0115 13:15:38.395198    4738 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0115 13:15:39.560179    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0115 13:15:39.560851    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0115 05:17:34.908256   68490 out.go:239] * 
	* 
	W0115 05:17:34.908912   68490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 05:17:34.993714   68490 out.go:177] 
	W0115 05:17:35.036202   68490 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0115 13:15:38.395198    4738 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0115 13:15:39.560179    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0115 13:15:39.560851    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0115 13:15:38.395198    4738 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0115 13:15:39.560179    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0115 13:15:39.560851    4738 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0115 05:17:35.036273   68490 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0115 05:17:35.036308   68490 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0115 05:17:35.057739   68490 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-482000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (257.71s)
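Editor's note: the kubeadm warnings above flag a cgroupfs/systemd cgroup-driver mismatch, and minikube's own suggestion line points at the same thing. A minimal retry sketch along those lines, reusing the profile and flags from the failing invocation; the --extra-config value is the workaround suggested in the output above, not a verified fix, and the quoting of the ssh commands may need adjusting:

	out/minikube-darwin-amd64 start -p ingress-addon-legacy-482000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still refuses connections on 10248, check it inside the node,
	# as the troubleshooting text above suggests:
	out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 ssh -- "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 ssh -- "sudo journalctl -xeu kubelet"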

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (107.76s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 addons enable ingress --alsologtostderr -v=5
E0115 05:18:40.598253   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m47.325531837s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:17:35.216953   68643 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:17:35.218286   68643 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:17:35.218294   68643 out.go:309] Setting ErrFile to fd 2...
	I0115 05:17:35.218300   68643 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:17:35.218502   68643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:17:35.218880   68643 mustload.go:65] Loading cluster: ingress-addon-legacy-482000
	I0115 05:17:35.219269   68643 config.go:182] Loaded profile config "ingress-addon-legacy-482000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0115 05:17:35.219287   68643 addons.go:597] checking whether the cluster is paused
	I0115 05:17:35.219445   68643 config.go:182] Loaded profile config "ingress-addon-legacy-482000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0115 05:17:35.219483   68643 host.go:66] Checking if "ingress-addon-legacy-482000" exists ...
	I0115 05:17:35.220098   68643 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Status}}
	I0115 05:17:35.276243   68643 ssh_runner.go:195] Run: systemctl --version
	I0115 05:17:35.276339   68643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:17:35.327798   68643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:17:35.418264   68643 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0115 05:17:35.457989   68643 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0115 05:17:35.479651   68643 config.go:182] Loaded profile config "ingress-addon-legacy-482000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0115 05:17:35.479663   68643 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-482000"
	I0115 05:17:35.479669   68643 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-482000"
	I0115 05:17:35.479697   68643 host.go:66] Checking if "ingress-addon-legacy-482000" exists ...
	I0115 05:17:35.479996   68643 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Status}}
	I0115 05:17:35.550525   68643 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0115 05:17:35.571381   68643 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0115 05:17:35.592457   68643 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0115 05:17:35.615622   68643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0115 05:17:35.636684   68643 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 05:17:35.636707   68643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0115 05:17:35.636833   68643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:17:35.687616   68643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:17:35.786627   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:35.833782   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:35.833815   68643 retry.go:31] will retry after 371.564893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:36.205983   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:36.254710   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:36.254739   68643 retry.go:31] will retry after 494.271308ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:36.749475   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:36.804486   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:36.804506   68643 retry.go:31] will retry after 340.184796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:37.145073   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:37.196283   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:37.196304   68643 retry.go:31] will retry after 707.835729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:37.904981   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:37.954333   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:37.954358   68643 retry.go:31] will retry after 642.095497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:38.596784   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:38.660136   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:38.660155   68643 retry.go:31] will retry after 1.599241496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:40.261486   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:40.313181   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:40.313205   68643 retry.go:31] will retry after 2.3846559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:42.697901   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:42.768454   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:42.768471   68643 retry.go:31] will retry after 2.195307932s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:44.963784   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:45.013701   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:45.013719   68643 retry.go:31] will retry after 9.242354262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:54.256702   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:54.317957   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:54.317974   68643 retry.go:31] will retry after 5.291386123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:59.609387   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:17:59.661077   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:17:59.661094   68643 retry.go:31] will retry after 15.578437605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:18:15.239312   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:18:15.291316   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:18:15.291333   68643 retry.go:31] will retry after 18.710892223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:18:34.002674   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:18:34.053895   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:18:34.053919   68643 retry.go:31] will retry after 48.182365264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:22.233915   68643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0115 05:19:22.363608   68643 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:22.363633   68643 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-482000"
	I0115 05:19:22.385215   68643 out.go:177] * Verifying ingress addon...
	I0115 05:19:22.407281   68643 out.go:177] 
	W0115 05:19:22.428762   68643 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-482000" does not exist: client config: context "ingress-addon-legacy-482000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-482000" does not exist: client config: context "ingress-addon-legacy-482000" does not exist]
	W0115 05:19:22.428779   68643 out.go:239] * 
	* 
	W0115 05:19:22.435495   68643 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 05:19:22.456940   68643 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
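Editor's note: every retry above fails the same way (connection refused on localhost:8443), i.e. the addon apply is waiting on an apiserver that never came up. A minimal sketch of that wait-then-apply pattern as a shell loop, assuming it runs inside the node (e.g. via minikube ssh); the 5s interval and 24-attempt cap are illustrative, while the kubectl binary and manifest paths are the ones from the log:

	for i in $(seq 1 24); do
	  # plain curl exits non-zero on "connection refused", so success here means the apiserver is answering
	  if curl -sk https://localhost:8443/healthz >/dev/null 2>&1; then
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml && break
	  fi
	  sleep 5
	done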
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-482000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-482000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5",
	        "Created": "2024-01-15T13:13:25.527112214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T13:13:25.739041481Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/hosts",
	        "LogPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5-json.log",
	        "Name": "/ingress-addon-legacy-482000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-482000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-482000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63-init/diff:/var/lib/docker/overlay2/b3cb78fe399645181979e767fd2b27916778197e6245b2db21b3eb1fe7dda1f5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-482000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-482000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-482000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-482000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-482000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c52d563e1da90ce414962fa1df8d9f1f7b3e7e151030bd1bd475447b08ef956f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54699"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54695"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54696"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54697"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54698"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c52d563e1da9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-482000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "01e6adecdd3d",
	                        "ingress-addon-legacy-482000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "1736f1945b5c282c51c1427aefc8d7c5b95fb9883f9d2b3660576aa7e5f3df5e",
	                    "EndpointID": "d7fd6caa78b65da813d9019e0f38370b5eed09e2397f731aa620e12eae53c3e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
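Editor's note: the same Go-template form minikube uses above to read the SSH port can be pointed at 8443/tcp to recover the host port the apiserver is published on, which is the endpoint the failed applies could not reach. A usage sketch, with the port key taken from the Ports map shown above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-482000
	# With the Ports map above, this prints 54698.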
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-482000 -n ingress-addon-legacy-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-482000 -n ingress-addon-legacy-482000: exit status 6 (377.659416ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:19:22.901673   68667 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-482000" does not appear in /Users/jenkins/minikube-integration/17953-64881/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-482000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (107.76s)
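Editor's note: a hedged post-mortem sketch that simply strings together the advice already printed in this log: list the kube-* containers inside the node, inspect whichever one is failing (CONTAINERID is a placeholder for the ID printed by the first command), then repair the stale kubectl context the status warning points at:

	out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 ssh -- "docker ps -a | grep kube | grep -v pause"
	out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 ssh -- "docker logs CONTAINERID"
	out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 update-context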

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (84.37s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 addons enable ingress-dns --alsologtostderr -v=5
E0115 05:19:54.001612   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-482000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m23.932628634s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:19:22.968272   68677 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:19:22.969401   68677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:19:22.969409   68677 out.go:309] Setting ErrFile to fd 2...
	I0115 05:19:22.969413   68677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:19:22.969610   68677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:19:22.969981   68677 mustload.go:65] Loading cluster: ingress-addon-legacy-482000
	I0115 05:19:22.970263   68677 config.go:182] Loaded profile config "ingress-addon-legacy-482000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0115 05:19:22.970279   68677 addons.go:597] checking whether the cluster is paused
	I0115 05:19:22.970362   68677 config.go:182] Loaded profile config "ingress-addon-legacy-482000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0115 05:19:22.970380   68677 host.go:66] Checking if "ingress-addon-legacy-482000" exists ...
	I0115 05:19:22.970778   68677 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Status}}
	I0115 05:19:23.021494   68677 ssh_runner.go:195] Run: systemctl --version
	I0115 05:19:23.021585   68677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:19:23.071637   68677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:19:23.162452   68677 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0115 05:19:23.202642   68677 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0115 05:19:23.223615   68677 config.go:182] Loaded profile config "ingress-addon-legacy-482000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0115 05:19:23.223633   68677 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-482000"
	I0115 05:19:23.223647   68677 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-482000"
	I0115 05:19:23.223705   68677 host.go:66] Checking if "ingress-addon-legacy-482000" exists ...
	I0115 05:19:23.224094   68677 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-482000 --format={{.State.Status}}
	I0115 05:19:23.296529   68677 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0115 05:19:23.317411   68677 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0115 05:19:23.338612   68677 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 05:19:23.338633   68677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0115 05:19:23.338739   68677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-482000
	I0115 05:19:23.389584   68677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54699 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/ingress-addon-legacy-482000/id_rsa Username:docker}
	I0115 05:19:23.491205   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:23.540344   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:23.540372   68677 retry.go:31] will retry after 331.490486ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:23.872896   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:23.920318   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:23.920344   68677 retry.go:31] will retry after 524.620002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:24.447263   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:24.508227   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:24.508245   68677 retry.go:31] will retry after 659.097097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:25.167483   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:25.217306   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:25.217322   68677 retry.go:31] will retry after 639.652196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:25.857319   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:25.914300   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:25.914320   68677 retry.go:31] will retry after 936.529292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:26.851254   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:26.916051   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:26.916069   68677 retry.go:31] will retry after 2.28411149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:29.200174   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:29.251736   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:29.251756   68677 retry.go:31] will retry after 4.129021641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:33.381335   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:33.439314   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:33.439337   68677 retry.go:31] will retry after 3.033003624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:36.474484   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:36.530507   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:36.530526   68677 retry.go:31] will retry after 4.640597695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:41.173060   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:41.222824   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:41.222859   68677 retry.go:31] will retry after 8.920623802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:50.143175   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:19:50.202136   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:19:50.202153   68677 retry.go:31] will retry after 15.064109223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:20:05.266052   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:20:05.322155   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:20:05.322178   68677 retry.go:31] will retry after 23.993574502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:20:29.315544   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:20:29.367880   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:20:29.367897   68677 retry.go:31] will retry after 17.328363265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:20:46.696044   68677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0115 05:20:46.744042   68677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0115 05:20:46.765857   68677 out.go:177] 
	W0115 05:20:46.787935   68677 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0115 05:20:46.787967   68677 out.go:239] * 
	* 
	W0115 05:20:46.795939   68677 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 05:20:46.816885   68677 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-482000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-482000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5",
	        "Created": "2024-01-15T13:13:25.527112214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T13:13:25.739041481Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/hosts",
	        "LogPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5-json.log",
	        "Name": "/ingress-addon-legacy-482000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-482000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-482000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63-init/diff:/var/lib/docker/overlay2/b3cb78fe399645181979e767fd2b27916778197e6245b2db21b3eb1fe7dda1f5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-482000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-482000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-482000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-482000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-482000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c52d563e1da90ce414962fa1df8d9f1f7b3e7e151030bd1bd475447b08ef956f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54699"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54695"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54696"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54697"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54698"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c52d563e1da9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-482000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "01e6adecdd3d",
	                        "ingress-addon-legacy-482000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "1736f1945b5c282c51c1427aefc8d7c5b95fb9883f9d2b3660576aa7e5f3df5e",
	                    "EndpointID": "d7fd6caa78b65da813d9019e0f38370b5eed09e2397f731aa620e12eae53c3e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-482000 -n ingress-addon-legacy-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-482000 -n ingress-addon-legacy-482000: exit status 6 (384.906033ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:20:47.268429   68691 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-482000" does not appear in /Users/jenkins/minikube-integration/17953-64881/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-482000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (84.37s)
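The enable step above never reaches a healthy apiserver: every kubectl apply of ingress-dns-pod.yaml is refused on localhost:8443, and the retry.go lines show increasing, jittered delays (331ms, 524ms, 659ms, ... 23.99s) until roughly 80 seconds have elapsed and the command exits with MK_ADDON_ENABLE (exit status 10). A simplified Go sketch of that retry-with-backoff shape (illustrative only; the command, the rough delays, and the overall budget mirror the log above, not minikube's actual retry.go implementation):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	// Command shape taken from the failure log above (paths are the in-VM ones).
	apply := []string{
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.18.20/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/ingress-dns-pod.yaml",
	}

	// Roughly exponential backoff with jitter, capped by an overall deadline,
	// approximating the retry delays visible in the log (values are hypothetical).
	deadline := time.Now().Add(80 * time.Second)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		out, err := exec.Command(apply[0], apply[1:]...).CombinedOutput()
		if err == nil {
			fmt.Printf("apply succeeded on attempt %d\n", attempt)
			return
		}
		if time.Now().After(deadline) {
			// minikube surfaces this condition as MK_ADDON_ENABLE / exit status 10.
			fmt.Printf("giving up after %d attempts: %v\n%s", attempt, err, out)
			return
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay *= 2
	}
}

The backoff never helps here because the apiserver on 8443 is not coming up at all, which is why the same "connection refused" error repeats until the budget is exhausted.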

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-482000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-482000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5",
	        "Created": "2024-01-15T13:13:25.527112214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T13:13:25.739041481Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/hosts",
	        "LogPath": "/var/lib/docker/containers/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5/01e6adecdd3d3aa99a947e08750856ed7335a481446f5627e0beeb852c1fdbd5-json.log",
	        "Name": "/ingress-addon-legacy-482000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-482000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-482000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63-init/diff:/var/lib/docker/overlay2/b3cb78fe399645181979e767fd2b27916778197e6245b2db21b3eb1fe7dda1f5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2cb0888d187b97af0912977b10ebd3b19657ae1014924785c29b2fd6cdbb2e63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-482000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-482000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-482000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-482000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-482000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c52d563e1da90ce414962fa1df8d9f1f7b3e7e151030bd1bd475447b08ef956f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54699"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54695"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54696"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54697"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54698"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c52d563e1da9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-482000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "01e6adecdd3d",
	                        "ingress-addon-legacy-482000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "1736f1945b5c282c51c1427aefc8d7c5b95fb9883f9d2b3660576aa7e5f3df5e",
	                    "EndpointID": "d7fd6caa78b65da813d9019e0f38370b5eed09e2397f731aa620e12eae53c3e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-482000 -n ingress-addon-legacy-482000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-482000 -n ingress-addon-legacy-482000: exit status 6 (404.028118ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:20:47.724016   68703 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-482000" does not appear in /Users/jenkins/minikube-integration/17953-64881/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-482000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)
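ValidateIngressAddons fails almost instantly (0.46s) because the helper cannot construct a Kubernetes client at all (addons_test.go:201), which follows directly from the missing kubeconfig entry seen in the two failures above. For reference, a minimal client-go sketch of building such a client from the same kubeconfig path (illustrative only; this is not the test helper's actual code):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the failure log above.
	kubeconfig := "/Users/jenkins/minikube-integration/17953-64881/kubeconfig"

	// Build a rest.Config from the kubeconfig's current context; this is the
	// step that has nothing to work with when the profile's entry is missing.
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to get Kubernetes client config: %v\n", err)
		os.Exit(1)
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to get Kubernetes client: %v\n", err)
		os.Exit(1)
	}

	// A trivial call that exercises the client, e.g. listing kube-system pods.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintf(os.Stderr, "list pods: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("kube-system has %d pods\n", len(pods.Items))
}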

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (755.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-456000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0115 05:25:56.721434   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:26:17.050438   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:29:53.968477   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:30:56.704464   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:32:19.756960   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:34:53.952601   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:35:56.689167   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-456000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.650452605s)

                                                
                                                
-- stdout --
	* [multinode-456000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-456000 in cluster multinode-456000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-456000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:25:04.862422   70438 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:25:04.862722   70438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:25:04.862729   70438 out.go:309] Setting ErrFile to fd 2...
	I0115 05:25:04.862733   70438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:25:04.862925   70438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:25:04.864399   70438 out.go:303] Setting JSON to false
	I0115 05:25:04.887015   70438 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":31847,"bootTime":1705293257,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:25:04.887125   70438 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:25:04.908965   70438 out.go:177] * [multinode-456000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:25:04.952818   70438 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 05:25:04.952890   70438 notify.go:220] Checking for updates...
	I0115 05:25:04.995448   70438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:25:05.017656   70438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:25:05.039700   70438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:25:05.061337   70438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 05:25:05.082534   70438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 05:25:05.104000   70438 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:25:05.160233   70438 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:25:05.160408   70438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:25:05.264285   70438 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-15 13:25:05.255192323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:25:05.306759   70438 out.go:177] * Using the docker driver based on user configuration
	I0115 05:25:05.327943   70438 start.go:298] selected driver: docker
	I0115 05:25:05.327977   70438 start.go:902] validating driver "docker" against <nil>
	I0115 05:25:05.327994   70438 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 05:25:05.332460   70438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:25:05.435888   70438 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-15 13:25:05.426104182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:25:05.436074   70438 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 05:25:05.436280   70438 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 05:25:05.457167   70438 out.go:177] * Using Docker Desktop driver with root privileges
	I0115 05:25:05.478172   70438 cni.go:84] Creating CNI manager for ""
	I0115 05:25:05.478196   70438 cni.go:136] 0 nodes found, recommending kindnet
	I0115 05:25:05.478207   70438 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 05:25:05.478223   70438 start_flags.go:321] config:
	{Name:multinode-456000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-456000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:25:05.500181   70438 out.go:177] * Starting control plane node multinode-456000 in cluster multinode-456000
	I0115 05:25:05.523082   70438 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 05:25:05.545280   70438 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 05:25:05.589097   70438 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:25:05.589181   70438 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0115 05:25:05.589198   70438 cache.go:56] Caching tarball of preloaded images
	I0115 05:25:05.589195   70438 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 05:25:05.589411   70438 preload.go:174] Found /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 05:25:05.589432   70438 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0115 05:25:05.591029   70438 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/multinode-456000/config.json ...
	I0115 05:25:05.591130   70438 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/multinode-456000/config.json: {Name:mk57cdc7a31a0f7c77791552212d8f40b0ba927e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:25:05.641170   70438 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 05:25:05.641205   70438 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 05:25:05.641233   70438 cache.go:194] Successfully downloaded all kic artifacts
	I0115 05:25:05.641281   70438 start.go:365] acquiring machines lock for multinode-456000: {Name:mk3c781fec38dc7197a8eca34d9ca558cb93e4e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 05:25:05.641435   70438 start.go:369] acquired machines lock for "multinode-456000" in 142.071µs
	I0115 05:25:05.641460   70438 start.go:93] Provisioning new machine with config: &{Name:multinode-456000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-456000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0115 05:25:05.641550   70438 start.go:125] createHost starting for "" (driver="docker")
	I0115 05:25:05.684168   70438 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 05:25:05.684536   70438 start.go:159] libmachine.API.Create for "multinode-456000" (driver="docker")
	I0115 05:25:05.684614   70438 client.go:168] LocalClient.Create starting
	I0115 05:25:05.684830   70438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 05:25:05.684934   70438 main.go:141] libmachine: Decoding PEM data...
	I0115 05:25:05.684967   70438 main.go:141] libmachine: Parsing certificate...
	I0115 05:25:05.685083   70438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 05:25:05.685156   70438 main.go:141] libmachine: Decoding PEM data...
	I0115 05:25:05.685173   70438 main.go:141] libmachine: Parsing certificate...
	I0115 05:25:05.686084   70438 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 05:25:05.737243   70438 cli_runner.go:211] docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 05:25:05.737340   70438 network_create.go:281] running [docker network inspect multinode-456000] to gather additional debugging logs...
	I0115 05:25:05.737360   70438 cli_runner.go:164] Run: docker network inspect multinode-456000
	W0115 05:25:05.787312   70438 cli_runner.go:211] docker network inspect multinode-456000 returned with exit code 1
	I0115 05:25:05.787345   70438 network_create.go:284] error running [docker network inspect multinode-456000]: docker network inspect multinode-456000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-456000 not found
	I0115 05:25:05.787354   70438 network_create.go:286] output of [docker network inspect multinode-456000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-456000 not found
	
	** /stderr **
	I0115 05:25:05.787477   70438 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:25:05.840668   70438 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 05:25:05.841037   70438 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f52c0}
	I0115 05:25:05.841059   70438 network_create.go:124] attempt to create docker network multinode-456000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0115 05:25:05.841128   70438 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-456000 multinode-456000
	I0115 05:25:05.926740   70438 network_create.go:108] docker network multinode-456000 192.168.58.0/24 created
	I0115 05:25:05.926776   70438 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-456000" container
	I0115 05:25:05.926890   70438 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 05:25:05.979385   70438 cli_runner.go:164] Run: docker volume create multinode-456000 --label name.minikube.sigs.k8s.io=multinode-456000 --label created_by.minikube.sigs.k8s.io=true
	I0115 05:25:06.031321   70438 oci.go:103] Successfully created a docker volume multinode-456000
	I0115 05:25:06.031460   70438 cli_runner.go:164] Run: docker run --rm --name multinode-456000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-456000 --entrypoint /usr/bin/test -v multinode-456000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 05:25:06.425164   70438 oci.go:107] Successfully prepared a docker volume multinode-456000
	I0115 05:25:06.425207   70438 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:25:06.425220   70438 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 05:25:06.425324   70438 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-456000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 05:31:05.665770   70438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:31:05.665960   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:05.720709   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:31:05.720816   70438 retry.go:31] will retry after 264.399242ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:05.985645   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:06.039384   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:31:06.039492   70438 retry.go:31] will retry after 370.350747ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:06.410198   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:06.462858   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:31:06.462967   70438 retry.go:31] will retry after 395.551363ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:06.859036   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:06.914439   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:31:06.914539   70438 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:31:06.914561   70438 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:06.914616   70438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:31:06.914687   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:06.965755   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:31:06.965851   70438 retry.go:31] will retry after 223.288616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:07.190000   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:07.242041   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:31:07.242145   70438 retry.go:31] will retry after 478.925606ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:07.721644   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:07.774627   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:31:07.774724   70438 retry.go:31] will retry after 618.794964ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:08.394000   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:31:08.447953   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:31:08.448049   70438 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:31:08.448069   70438 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:08.448100   70438 start.go:128] duration metric: createHost completed in 6m2.826465747s
	I0115 05:31:08.448107   70438 start.go:83] releasing machines lock for "multinode-456000", held for 6m2.826591801s
	W0115 05:31:08.448119   70438 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I0115 05:31:08.448530   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:08.498172   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:08.498221   70438 delete.go:82] Unable to get host status for multinode-456000, assuming it has already been deleted: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	W0115 05:31:08.498310   70438 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0115 05:31:08.498322   70438 start.go:709] Will try again in 5 seconds ...
	I0115 05:31:13.498704   70438 start.go:365] acquiring machines lock for multinode-456000: {Name:mk3c781fec38dc7197a8eca34d9ca558cb93e4e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 05:31:13.498858   70438 start.go:369] acquired machines lock for "multinode-456000" in 114.592µs
	I0115 05:31:13.498882   70438 start.go:96] Skipping create...Using existing machine configuration
	I0115 05:31:13.498893   70438 fix.go:54] fixHost starting: 
	I0115 05:31:13.499201   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:13.553472   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:13.553521   70438 fix.go:102] recreateIfNeeded on multinode-456000: state= err=unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:13.553540   70438 fix.go:107] machineExists: false. err=machine does not exist
	I0115 05:31:13.575183   70438 out.go:177] * docker "multinode-456000" container is missing, will recreate.
	I0115 05:31:13.617921   70438 delete.go:124] DEMOLISHING multinode-456000 ...
	I0115 05:31:13.618159   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:13.669195   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:31:13.669237   70438 stop.go:75] unable to get state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:13.669259   70438 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:13.669629   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:13.720322   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:13.720375   70438 delete.go:82] Unable to get host status for multinode-456000, assuming it has already been deleted: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:13.720457   70438 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:31:13.770600   70438 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:31:13.770634   70438 kic.go:371] could not find the container multinode-456000 to remove it. will try anyways
	I0115 05:31:13.770709   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:13.820032   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:31:13.820074   70438 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:13.820154   70438 cli_runner.go:164] Run: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0"
	W0115 05:31:13.869966   70438 cli_runner.go:211] docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0115 05:31:13.869997   70438 oci.go:650] error shutdown multinode-456000: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:14.870733   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:14.924523   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:14.924565   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:14.924601   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:14.924625   70438 retry.go:31] will retry after 446.845126ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:15.373667   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:15.428268   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:15.428313   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:15.428325   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:15.428350   70438 retry.go:31] will retry after 668.076439ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:16.096655   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:16.149113   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:16.149163   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:16.149172   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:16.149206   70438 retry.go:31] will retry after 821.319547ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:16.972822   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:17.026821   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:17.026867   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:17.026877   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:17.026900   70438 retry.go:31] will retry after 1.332225664s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:18.359505   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:18.413551   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:18.413599   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:18.413608   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:18.413632   70438 retry.go:31] will retry after 2.94320096s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:21.356969   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:21.409372   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:21.409417   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:21.409425   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:21.409451   70438 retry.go:31] will retry after 3.86325067s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:25.274751   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:25.329020   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:25.329062   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:25.329071   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:25.329096   70438 retry.go:31] will retry after 6.995576204s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:32.324862   70438 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:31:32.376918   70438 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:31:32.376967   70438 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:31:32.376977   70438 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:31:32.377003   70438 oci.go:88] couldn't shut down multinode-456000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	 
	I0115 05:31:32.377089   70438 cli_runner.go:164] Run: docker rm -f -v multinode-456000
	I0115 05:31:32.427821   70438 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:31:32.478374   70438 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:31:32.478486   70438 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:31:32.528951   70438 cli_runner.go:164] Run: docker network rm multinode-456000
	I0115 05:31:32.630980   70438 fix.go:114] Sleeping 1 second for extra luck!
	I0115 05:31:33.633089   70438 start.go:125] createHost starting for "" (driver="docker")
	I0115 05:31:33.655180   70438 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 05:31:33.655346   70438 start.go:159] libmachine.API.Create for "multinode-456000" (driver="docker")
	I0115 05:31:33.655388   70438 client.go:168] LocalClient.Create starting
	I0115 05:31:33.655586   70438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 05:31:33.655673   70438 main.go:141] libmachine: Decoding PEM data...
	I0115 05:31:33.655705   70438 main.go:141] libmachine: Parsing certificate...
	I0115 05:31:33.655786   70438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 05:31:33.655857   70438 main.go:141] libmachine: Decoding PEM data...
	I0115 05:31:33.655880   70438 main.go:141] libmachine: Parsing certificate...
	I0115 05:31:33.656566   70438 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 05:31:33.710275   70438 cli_runner.go:211] docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 05:31:33.710364   70438 network_create.go:281] running [docker network inspect multinode-456000] to gather additional debugging logs...
	I0115 05:31:33.710384   70438 cli_runner.go:164] Run: docker network inspect multinode-456000
	W0115 05:31:33.761829   70438 cli_runner.go:211] docker network inspect multinode-456000 returned with exit code 1
	I0115 05:31:33.761859   70438 network_create.go:284] error running [docker network inspect multinode-456000]: docker network inspect multinode-456000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-456000 not found
	I0115 05:31:33.761870   70438 network_create.go:286] output of [docker network inspect multinode-456000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-456000 not found
	
	** /stderr **
	I0115 05:31:33.761994   70438 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:31:33.814183   70438 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 05:31:33.815779   70438 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 05:31:33.816138   70438 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00052b3e0}
	I0115 05:31:33.816156   70438 network_create.go:124] attempt to create docker network multinode-456000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0115 05:31:33.816229   70438 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-456000 multinode-456000
	I0115 05:31:33.901372   70438 network_create.go:108] docker network multinode-456000 192.168.67.0/24 created
	I0115 05:31:33.901420   70438 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-456000" container
	I0115 05:31:33.901550   70438 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 05:31:33.952806   70438 cli_runner.go:164] Run: docker volume create multinode-456000 --label name.minikube.sigs.k8s.io=multinode-456000 --label created_by.minikube.sigs.k8s.io=true
	I0115 05:31:34.002483   70438 oci.go:103] Successfully created a docker volume multinode-456000
	I0115 05:31:34.002606   70438 cli_runner.go:164] Run: docker run --rm --name multinode-456000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-456000 --entrypoint /usr/bin/test -v multinode-456000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 05:31:34.319725   70438 oci.go:107] Successfully prepared a docker volume multinode-456000
	I0115 05:31:34.319755   70438 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:31:34.319768   70438 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 05:31:34.319875   70438 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-456000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 05:37:33.636104   70438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:37:33.636238   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:33.689809   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:33.689925   70438 retry.go:31] will retry after 276.577026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:33.967069   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:34.020049   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:34.020165   70438 retry.go:31] will retry after 311.080493ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:34.331563   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:34.385902   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:34.386021   70438 retry.go:31] will retry after 342.845753ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:34.729751   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:34.784207   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:34.784308   70438 retry.go:31] will retry after 841.433136ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:35.626978   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:35.681582   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:37:35.681685   70438 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:37:35.681703   70438 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:35.681766   70438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:37:35.681825   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:35.733991   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:35.734089   70438 retry.go:31] will retry after 161.514157ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:35.896075   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:35.949065   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:35.949161   70438 retry.go:31] will retry after 383.533344ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:36.335016   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:36.388751   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:36.388875   70438 retry.go:31] will retry after 808.205182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:37.197737   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:37.250808   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:37:37.250910   70438 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:37:37.250924   70438 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:37.250938   70438 start.go:128] duration metric: createHost completed in 6m3.637759302s
	I0115 05:37:37.251003   70438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:37:37.251058   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:37.301242   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:37.301336   70438 retry.go:31] will retry after 247.527655ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:37.551131   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:37.604195   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:37.604307   70438 retry.go:31] will retry after 397.116814ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:38.003419   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:38.056179   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:38.056271   70438 retry.go:31] will retry after 523.876776ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:38.581785   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:38.635802   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:37:38.635902   70438 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:37:38.635931   70438 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:38.635991   70438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:37:38.636042   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:38.686221   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:38.686311   70438 retry.go:31] will retry after 373.34137ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:39.061958   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:39.116561   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:39.116654   70438 retry.go:31] will retry after 364.736541ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:39.482897   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:39.536646   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:37:39.536746   70438 retry.go:31] will retry after 671.145659ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:40.209229   70438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:37:40.260401   70438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:37:40.260507   70438 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:37:40.260523   70438 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:37:40.260533   70438 fix.go:56] fixHost completed within 6m26.782884023s
	I0115 05:37:40.260541   70438 start.go:83] releasing machines lock for "multinode-456000", held for 6m26.782915755s
	W0115 05:37:40.260631   70438 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-456000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-456000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0115 05:37:40.304134   70438 out.go:177] 
	W0115 05:37:40.326020   70438 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0115 05:37:40.326091   70438 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0115 05:37:40.326188   70438 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0115 05:37:40.370285   70438 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-456000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
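The stderr above shows the same probe, docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000, being re-run after short randomized delays (retry.go:31 "will retry after 247.527655ms", "397.116814ms", ...) until the 360-second create-host deadline expires and start exits with DRV_CREATE_TIMEOUT. A rough sketch of that retry-with-deadline pattern follows; it is illustrative only (runWithRetry and its delays are made up for this sketch, not minikube's actual retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// runWithRetry re-runs a command until it succeeds or the deadline passes,
	// sleeping a jittered, growing delay between attempts.
	func runWithRetry(deadline time.Duration, name string, args ...string) ([]byte, error) {
		stop := time.Now().Add(deadline)
		delay := 200 * time.Millisecond
		for {
			out, err := exec.Command(name, args...).CombinedOutput()
			if err == nil {
				return out, nil
			}
			if time.Now().After(stop) {
				return out, fmt.Errorf("timed out after %v: %w", deadline, err)
			}
			// jittered back-off, loosely mirroring the 200-700ms gaps seen in the log
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay += 100 * time.Millisecond
		}
	}

	func main() {
		out, err := runWithRetry(5*time.Second, "docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"multinode-456000")
		if err != nil {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Println("ssh port:", string(out))
	}

As in the failing run, no amount of retrying helps here: the container never exists, so every attempt fails the same way until the deadline is hit.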
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (109.634655ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:37:40.588695   70653 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (755.83s)
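Note that the post-mortem `docker inspect multinode-456000` above returns a network object ("Scope": "local", "Driver": "bridge", empty "Containers": {}), while every `docker container inspect` call fails with "No such container": the minikube bridge network was created, but the node container itself is gone. A small illustrative check that separates the two object types (the exists helper is hypothetical, written for this sketch):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// exists reports whether "docker <object> inspect <name>" succeeds.
	func exists(object, name string) bool {
		return exec.Command("docker", object, "inspect", name).Run() == nil
	}

	func main() {
		name := "multinode-456000"
		fmt.Printf("container %q exists: %v\n", name, exists("container", name))
		fmt.Printf("network   %q exists: %v\n", name, exists("network", name))
		// In the failure above this would print container=false, network=true:
		// only the bridge network with an empty Containers map is left behind.
	}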

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (84.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (95.563053ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-456000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- rollout status deployment/busybox: exit status 1 (96.713019ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.851974ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.882393ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.053963ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.600171ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.870076ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.571265ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.809055ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.954869ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.036828ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.845209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (95.723675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- exec  -- nslookup kubernetes.io: exit status 1 (96.35862ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- exec  -- nslookup kubernetes.default: exit status 1 (96.205738ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (95.29604ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (110.224138ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:04.726880   70715 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (84.14s)
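The test at multinode_test.go:521 polls kubectl get pods -o jsonpath='{.items[*].status.podIP}', treating each failure as "may be temporary" and retrying, but with no cluster every attempt fails with "no server found for cluster". A rough sketch of that poll-until-non-empty loop, assuming only a kubectl binary on PATH (podIPs, the context name, and the timeouts are illustrative, not the test's actual values):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podIPs polls kubectl until the jsonpath query returns at least one pod IP
	// or the deadline expires.
	func podIPs(context string, deadline time.Duration) ([]string, error) {
		stop := time.Now().Add(deadline)
		for {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
			if err == nil && strings.TrimSpace(string(out)) != "" {
				return strings.Fields(string(out)), nil
			}
			if time.Now().After(stop) {
				return nil, fmt.Errorf("no pod IPs after %v (last error: %v)", deadline, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		ips, err := podIPs("multinode-456000", 30*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("pod IPs:", ips)
	}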

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-456000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (93.522737ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-456000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (109.497265ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:04.984884   70724 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.26s)

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-456000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-456000 -v 3 --alsologtostderr: exit status 80 (207.010698ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:39:05.041682   70728 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:39:05.042910   70728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:05.042917   70728 out.go:309] Setting ErrFile to fd 2...
	I0115 05:39:05.042921   70728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:05.043105   70728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:39:05.043436   70728 mustload.go:65] Loading cluster: multinode-456000
	I0115 05:39:05.043722   70728 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:39:05.044109   70728 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:05.093656   70728 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:05.117920   70728 out.go:177] 
	W0115 05:39:05.139844   70728 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:39:05.139873   70728 out.go:239] * 
	* 
	W0115 05:39:05.148081   70728 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 05:39:05.169781   70728 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-456000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (109.59049ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:05.356220   70734 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)
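`node add` exits immediately with status 80 because its cluster-load step cannot determine the machine state: docker container inspect multinode-456000 --format={{.State.Status}} exits 1 with "No such container". A minimal illustration of that state probe (containerState is a name made up for this sketch, not minikube's cli_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns the container's .State.Status (e.g. "running",
	// "exited"), or an error if the container does not exist.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", name).Output()
		if err != nil {
			return "", fmt.Errorf("unknown state %q: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("multinode-456000")
		fmt.Println(state, err)
	}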

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-456000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-456000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (35.473625ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-456000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-456000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-456000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (109.291316ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:05.555661   70741 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)
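The "unexpected end of JSON input" at multinode_test.go:220 is the error encoding/json returns when asked to unmarshal empty input, which is what the test is left with once kubectl exits with "context was not found" and prints nothing to stdout. A two-line reproduction (the labels variable is only for illustration):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		// kubectl produced no stdout, so the decode step effectively does this:
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}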

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:156: expected profile "multinode-456000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-456000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-456000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,
\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-456000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"Con
trolPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --
output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (109.857403ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:05.901928   70753 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
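The assertion at multinode_test.go:156 parses the `profile list --output json` payload shown above and expects the multinode-456000 profile's Config.Nodes slice to have 3 entries, but the saved profile still contains only the single control-plane node, since the second node was never added. A reduced sketch of that check against the same JSON shape (profileList is trimmed to just the fields the check needs and is not the test's actual type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just enough of "minikube profile list --output json".
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-456000","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, p := range pl.Valid {
			// The test wants 3 nodes here; the failing run only has 1.
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}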

                                                
                                    
TestMultiNode/serial/CopyFile (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 status --output json --alsologtostderr: exit status 7 (109.256216ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-456000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:39:05.959169   70757 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:39:05.959482   70757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:05.959488   70757 out.go:309] Setting ErrFile to fd 2...
	I0115 05:39:05.959492   70757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:05.959673   70757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:39:05.959856   70757 out.go:303] Setting JSON to true
	I0115 05:39:05.959895   70757 mustload.go:65] Loading cluster: multinode-456000
	I0115 05:39:05.959923   70757 notify.go:220] Checking for updates...
	I0115 05:39:05.960201   70757 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:39:05.960212   70757 status.go:255] checking status of multinode-456000 ...
	I0115 05:39:05.960650   70757 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:06.011255   70757 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:06.011309   70757 status.go:330] multinode-456000 host status = "" (err=state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	)
	I0115 05:39:06.011332   70757 status.go:257] multinode-456000 status: &{Name:multinode-456000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0115 05:39:06.011349   70757 status.go:260] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	E0115 05:39:06.011356   70757 status.go:263] The "multinode-456000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-456000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (110.759911ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:06.176492   70763 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.27s)
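The decode failure at multinode_test.go:181 ("json: cannot unmarshal object into Go value of type []cmd.Status") matches the stdout shown above: with a single (nonexistent) node, `status --output json` printed one JSON object, while the multinode test decodes into a slice of statuses. A hedged sketch of one way to tolerate both shapes (the status struct and decodeStatuses helper are trimmed and hypothetical, not minikube's cmd.Status):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	type status struct {
		Name string `json:"Name"`
		Host string `json:"Host"`
	}

	// decodeStatuses accepts either a single JSON object or a JSON array of them.
	func decodeStatuses(raw []byte) ([]status, error) {
		raw = bytes.TrimSpace(raw)
		if len(raw) > 0 && raw[0] == '[' {
			var many []status
			return many, json.Unmarshal(raw, &many)
		}
		var one status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []status{one}, nil
	}

	func main() {
		// The single-object shape from the failing run:
		raw := []byte(`{"Name":"multinode-456000","Host":"Nonexistent"}`)
		sts, err := decodeStatuses(raw)
		fmt.Println(sts, err)
	}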

                                                
                                    
TestMultiNode/serial/StopNode (0.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 node stop m03: exit status 85 (155.08459ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-456000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 status: exit status 7 (110.571664ms)

                                                
                                                
-- stdout --
	multinode-456000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:06.442811   70769 status.go:260] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	E0115 05:39:06.442821   70769 status.go:263] The "multinode-456000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr: exit status 7 (108.781532ms)

                                                
                                                
-- stdout --
	multinode-456000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:39:06.500296   70773 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:39:06.500523   70773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:06.500528   70773 out.go:309] Setting ErrFile to fd 2...
	I0115 05:39:06.500533   70773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:06.500722   70773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:39:06.500903   70773 out.go:303] Setting JSON to false
	I0115 05:39:06.500925   70773 mustload.go:65] Loading cluster: multinode-456000
	I0115 05:39:06.500964   70773 notify.go:220] Checking for updates...
	I0115 05:39:06.501222   70773 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:39:06.501232   70773 status.go:255] checking status of multinode-456000 ...
	I0115 05:39:06.501623   70773 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:06.551648   70773 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:06.551695   70773 status.go:330] multinode-456000 host status = "" (err=state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	)
	I0115 05:39:06.551715   70773 status.go:257] multinode-456000 status: &{Name:multinode-456000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0115 05:39:06.551733   70773 status.go:260] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	E0115 05:39:06.551741   70773 status.go:263] The "multinode-456000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr": multinode-456000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:261: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr": multinode-456000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:265: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr": multinode-456000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (107.883406ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:06.714293   70779 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.54s)
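
A minimal Go sketch of the status probe exercised above, for local reproduction. This is illustrative only, not minikube or test-suite code: the docker and minikube command lines are copied from the log, while the wrapper itself (the run helper and main) is hypothetical, and it assumes docker and out/minikube-darwin-amd64 are reachable from the working directory.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output and exit error,
// mirroring the "(dbg) Run:" / "Non-zero exit:" pattern in this report.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nerr: %v\n%s\n", name, args, err, out)
}

func main() {
	// The docker probe that returned "No such container: multinode-456000".
	run("docker", "container", "inspect", "multinode-456000", "--format", "{{.State.Status}}")
	// The minikube status call that then exited with status 7.
	run("out/minikube-darwin-amd64", "-p", "multinode-456000", "status", "--alsologtostderr")
}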

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 node start m03 --alsologtostderr: exit status 85 (155.159819ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:39:06.827831   70785 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:39:06.828748   70785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:06.828753   70785 out.go:309] Setting ErrFile to fd 2...
	I0115 05:39:06.828757   70785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:06.828960   70785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:39:06.829294   70785 mustload.go:65] Loading cluster: multinode-456000
	I0115 05:39:06.829577   70785 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:39:06.851462   70785 out.go:177] 
	W0115 05:39:06.872476   70785 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0115 05:39:06.872505   70785 out.go:239] * 
	* 
	W0115 05:39:06.880705   70785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 05:39:06.902569   70785 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0115 05:39:06.827831   70785 out.go:296] Setting OutFile to fd 1 ...
I0115 05:39:06.828748   70785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:39:06.828753   70785 out.go:309] Setting ErrFile to fd 2...
I0115 05:39:06.828757   70785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:39:06.828960   70785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
I0115 05:39:06.829294   70785 mustload.go:65] Loading cluster: multinode-456000
I0115 05:39:06.829577   70785 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:39:06.851462   70785 out.go:177] 
W0115 05:39:06.872476   70785 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0115 05:39:06.872505   70785 out.go:239] * 
* 
W0115 05:39:06.880705   70785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0115 05:39:06.902569   70785 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-456000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 status: exit status 7 (109.21175ms)

                                                
                                                
-- stdout --
	multinode-456000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:07.034552   70787 status.go:260] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	E0115 05:39:07.034563   70787 status.go:263] The "multinode-456000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-456000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "e5bb0cc2be31ba8ad0bf13b88e9218d75aadde41d113cc24fb4fdaded1fe001b",
	        "Created": "2024-01-15T13:31:33.863554312Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (109.453247ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:39:07.199098   70793 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.48s)
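
The three "incorrect number of ..." assertions above count field values in the textual status block (the host/kubelet/apiserver lines). The Go sketch below shows one way such a block can be parsed and tallied; it is illustrative only, with the field names and sample text taken from the status output above and the parsing helper itself hypothetical rather than minikube code.

package main

import (
	"fmt"
	"strings"
)

// countField counts status lines whose key equals name and whose value
// equals want, e.g. countField(status, "kubelet", "Running").
func countField(status, name, want string) int {
	n := 0
	for _, line := range strings.Split(status, "\n") {
		key, val, ok := strings.Cut(strings.TrimSpace(line), ": ")
		if ok && key == name && val == want {
			n++
		}
	}
	return n
}

func main() {
	// Status text as printed by "minikube status" in the log above.
	status := `multinode-456000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent`
	fmt.Println("running kubelets:", countField(status, "kubelet", "Running")) // 0, hence the failure
	fmt.Println("stopped hosts:", countField(status, "host", "Stopped"))       // also 0
}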

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (787.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-456000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-456000
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-456000: exit status 82 (12.858540882s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-456000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-456000" : exit status 82
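
The repeated "Stopping node" lines above, and the retry.go "will retry after ..." entries later in this log, both follow a bounded retry-with-backoff pattern around docker container inspect. The Go sketch below is a rough illustration of that pattern only, not minikube's implementation: the command and container name are copied from the log, while the loop bounds and delays are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Probe the container state a bounded number of times, backing off
	// between attempts, as the stop/delete paths in the log appear to do.
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("docker", "container", "inspect",
			"multinode-456000", "--format", "{{.State.Status}}").CombinedOutput()
		if err == nil {
			fmt.Printf("container state: %s", out)
			return
		}
		fmt.Printf("attempt %d failed: %v, retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("giving up: container never became inspectable")
}
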
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-456000 --wait=true -v=8 --alsologtostderr
E0115 05:39:53.935528   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:40:56.671785   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:42:57.087719   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:44:54.011354   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:45:56.747658   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:48:59.793885   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:49:53.995936   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:50:56.730530   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-456000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m54.526982459s)

                                                
                                                
-- stdout --
	* [multinode-456000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-456000 in cluster multinode-456000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* docker "multinode-456000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-456000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:39:20.172407   70816 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:39:20.172698   70816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:20.172705   70816 out.go:309] Setting ErrFile to fd 2...
	I0115 05:39:20.172709   70816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:39:20.172891   70816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:39:20.174378   70816 out.go:303] Setting JSON to false
	I0115 05:39:20.196500   70816 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":32703,"bootTime":1705293257,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:39:20.196597   70816 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:39:20.218977   70816 out.go:177] * [multinode-456000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:39:20.262840   70816 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 05:39:20.262946   70816 notify.go:220] Checking for updates...
	I0115 05:39:20.306443   70816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:39:20.327596   70816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:39:20.370386   70816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:39:20.391785   70816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 05:39:20.413768   70816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 05:39:20.436109   70816 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:39:20.436278   70816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:39:20.492844   70816 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:39:20.493009   70816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:39:20.594702   70816 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:83 SystemTime:2024-01-15 13:39:20.585442447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:39:20.637449   70816 out.go:177] * Using the docker driver based on existing profile
	I0115 05:39:20.658455   70816 start.go:298] selected driver: docker
	I0115 05:39:20.658526   70816 start.go:902] validating driver "docker" against &{Name:multinode-456000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-456000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:39:20.658646   70816 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 05:39:20.658851   70816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:39:20.761323   70816 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:83 SystemTime:2024-01-15 13:39:20.752337162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:39:20.764441   70816 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 05:39:20.764515   70816 cni.go:84] Creating CNI manager for ""
	I0115 05:39:20.764524   70816 cni.go:136] 1 nodes found, recommending kindnet
	I0115 05:39:20.764533   70816 start_flags.go:321] config:
	{Name:multinode-456000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-456000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:39:20.785957   70816 out.go:177] * Starting control plane node multinode-456000 in cluster multinode-456000
	I0115 05:39:20.807036   70816 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 05:39:20.850936   70816 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 05:39:20.872850   70816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:39:20.872902   70816 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 05:39:20.872931   70816 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0115 05:39:20.872951   70816 cache.go:56] Caching tarball of preloaded images
	I0115 05:39:20.873154   70816 preload.go:174] Found /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 05:39:20.873173   70816 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0115 05:39:20.873361   70816 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/multinode-456000/config.json ...
	I0115 05:39:20.925220   70816 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 05:39:20.925290   70816 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 05:39:20.925321   70816 cache.go:194] Successfully downloaded all kic artifacts
	I0115 05:39:20.925361   70816 start.go:365] acquiring machines lock for multinode-456000: {Name:mk3c781fec38dc7197a8eca34d9ca558cb93e4e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 05:39:20.925447   70816 start.go:369] acquired machines lock for "multinode-456000" in 68.604µs
	I0115 05:39:20.925467   70816 start.go:96] Skipping create...Using existing machine configuration
	I0115 05:39:20.925476   70816 fix.go:54] fixHost starting: 
	I0115 05:39:20.925712   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:20.975616   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:20.975663   70816 fix.go:102] recreateIfNeeded on multinode-456000: state= err=unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:20.975685   70816 fix.go:107] machineExists: false. err=machine does not exist
	I0115 05:39:20.997510   70816 out.go:177] * docker "multinode-456000" container is missing, will recreate.
	I0115 05:39:21.041260   70816 delete.go:124] DEMOLISHING multinode-456000 ...
	I0115 05:39:21.041424   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:21.092952   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:39:21.092999   70816 stop.go:75] unable to get state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:21.093018   70816 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:21.093401   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:21.144067   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:21.144124   70816 delete.go:82] Unable to get host status for multinode-456000, assuming it has already been deleted: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:21.144206   70816 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:39:21.194668   70816 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:39:21.194707   70816 kic.go:371] could not find the container multinode-456000 to remove it. will try anyways
	I0115 05:39:21.194781   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:21.244804   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:39:21.244849   70816 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:21.244930   70816 cli_runner.go:164] Run: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0"
	W0115 05:39:21.295608   70816 cli_runner.go:211] docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0115 05:39:21.295642   70816 oci.go:650] error shutdown multinode-456000: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:22.296744   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:22.349449   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:22.349505   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:22.349519   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:22.349555   70816 retry.go:31] will retry after 566.252015ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:22.917441   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:22.971129   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:22.971181   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:22.971217   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:22.971238   70816 retry.go:31] will retry after 500.832938ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:23.473400   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:23.525253   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:23.525300   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:23.525312   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:23.525339   70816 retry.go:31] will retry after 873.949848ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:24.400089   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:24.452483   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:24.452543   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:24.452554   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:24.452576   70816 retry.go:31] will retry after 1.673631963s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:26.127109   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:26.178608   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:26.178655   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:26.178665   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:26.178689   70816 retry.go:31] will retry after 3.329562781s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:29.508790   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:29.606666   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:29.606722   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:29.606748   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:29.606772   70816 retry.go:31] will retry after 2.457316192s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:32.064298   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:32.118267   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:32.118316   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:32.118325   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:32.118348   70816 retry.go:31] will retry after 7.03438726s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:39.153320   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:39:39.207702   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:39:39.207758   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:39:39.207768   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:39:39.207793   70816 oci.go:88] couldn't shut down multinode-456000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	 
	I0115 05:39:39.207862   70816 cli_runner.go:164] Run: docker rm -f -v multinode-456000
	I0115 05:39:39.258515   70816 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:39:39.308152   70816 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:39:39.308261   70816 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:39:39.359055   70816 cli_runner.go:164] Run: docker network rm multinode-456000
	I0115 05:39:39.456992   70816 fix.go:114] Sleeping 1 second for extra luck!
	I0115 05:39:40.457833   70816 start.go:125] createHost starting for "" (driver="docker")
	I0115 05:39:40.480205   70816 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 05:39:40.480378   70816 start.go:159] libmachine.API.Create for "multinode-456000" (driver="docker")
	I0115 05:39:40.480430   70816 client.go:168] LocalClient.Create starting
	I0115 05:39:40.480618   70816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 05:39:40.480709   70816 main.go:141] libmachine: Decoding PEM data...
	I0115 05:39:40.480745   70816 main.go:141] libmachine: Parsing certificate...
	I0115 05:39:40.480846   70816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 05:39:40.480923   70816 main.go:141] libmachine: Decoding PEM data...
	I0115 05:39:40.480939   70816 main.go:141] libmachine: Parsing certificate...
	I0115 05:39:40.481641   70816 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 05:39:40.533458   70816 cli_runner.go:211] docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 05:39:40.533542   70816 network_create.go:281] running [docker network inspect multinode-456000] to gather additional debugging logs...
	I0115 05:39:40.533562   70816 cli_runner.go:164] Run: docker network inspect multinode-456000
	W0115 05:39:40.583888   70816 cli_runner.go:211] docker network inspect multinode-456000 returned with exit code 1
	I0115 05:39:40.583917   70816 network_create.go:284] error running [docker network inspect multinode-456000]: docker network inspect multinode-456000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-456000 not found
	I0115 05:39:40.583949   70816 network_create.go:286] output of [docker network inspect multinode-456000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-456000 not found
	
	** /stderr **
	I0115 05:39:40.584071   70816 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:39:40.635858   70816 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 05:39:40.636242   70816 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002586060}
	I0115 05:39:40.636258   70816 network_create.go:124] attempt to create docker network multinode-456000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0115 05:39:40.636337   70816 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-456000 multinode-456000
	I0115 05:39:40.721443   70816 network_create.go:108] docker network multinode-456000 192.168.58.0/24 created
	I0115 05:39:40.721481   70816 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-456000" container
	I0115 05:39:40.721594   70816 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 05:39:40.773538   70816 cli_runner.go:164] Run: docker volume create multinode-456000 --label name.minikube.sigs.k8s.io=multinode-456000 --label created_by.minikube.sigs.k8s.io=true
	I0115 05:39:40.823455   70816 oci.go:103] Successfully created a docker volume multinode-456000
	I0115 05:39:40.823585   70816 cli_runner.go:164] Run: docker run --rm --name multinode-456000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-456000 --entrypoint /usr/bin/test -v multinode-456000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 05:39:41.123575   70816 oci.go:107] Successfully prepared a docker volume multinode-456000
	I0115 05:39:41.123612   70816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:39:41.123627   70816 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 05:39:41.123724   70816 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-456000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 05:45:40.553941   70816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:45:40.554075   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:40.607346   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:40.607466   70816 retry.go:31] will retry after 262.826552ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:40.870768   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:40.922759   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:40.922878   70816 retry.go:31] will retry after 399.10528ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:41.322766   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:41.377227   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:41.377327   70816 retry.go:31] will retry after 608.984491ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:41.987719   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:42.040649   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:45:42.040774   70816 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:45:42.040795   70816 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:42.040857   70816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:45:42.040924   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:42.092509   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:42.092612   70816 retry.go:31] will retry after 165.331702ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:42.258844   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:42.312602   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:42.312702   70816 retry.go:31] will retry after 370.472443ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:42.683542   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:42.737522   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:42.737622   70816 retry.go:31] will retry after 445.525112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:43.185511   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:43.240209   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:45:43.240310   70816 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:45:43.240332   70816 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:43.240347   70816 start.go:128] duration metric: createHost completed in 6m2.71084566s
	I0115 05:45:43.240409   70816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:45:43.240461   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:43.292406   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:43.292491   70816 retry.go:31] will retry after 232.995601ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:43.526153   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:43.578202   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:43.578289   70816 retry.go:31] will retry after 479.499551ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:44.058333   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:44.111380   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:44.111471   70816 retry.go:31] will retry after 398.889322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:44.512617   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:44.566908   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:45:44.567015   70816 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:45:44.567030   70816 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:44.567085   70816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:45:44.567140   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:44.678528   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:44.678610   70816 retry.go:31] will retry after 165.756821ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:44.845202   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:44.899520   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:44.899611   70816 retry.go:31] will retry after 401.816217ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:45.302625   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:45.357355   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:45.357455   70816 retry.go:31] will retry after 319.26478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:45.678396   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:45.730187   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:45:45.730280   70816 retry.go:31] will retry after 712.439723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:46.443955   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:45:46.497614   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:45:46.497723   70816 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:45:46.497738   70816 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:46.497751   70816 fix.go:56] fixHost completed within 6m25.501882411s
	I0115 05:45:46.497764   70816 start.go:83] releasing machines lock for "multinode-456000", held for 6m25.501915167s
	W0115 05:45:46.497777   70816 start.go:694] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0115 05:45:46.497845   70816 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0115 05:45:46.497851   70816 start.go:709] Will try again in 5 seconds ...
	I0115 05:45:51.498179   70816 start.go:365] acquiring machines lock for multinode-456000: {Name:mk3c781fec38dc7197a8eca34d9ca558cb93e4e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 05:45:51.498367   70816 start.go:369] acquired machines lock for "multinode-456000" in 150.72µs
	I0115 05:45:51.498409   70816 start.go:96] Skipping create...Using existing machine configuration
	I0115 05:45:51.498418   70816 fix.go:54] fixHost starting: 
	I0115 05:45:51.498867   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:51.550265   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:51.550314   70816 fix.go:102] recreateIfNeeded on multinode-456000: state= err=unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:51.550334   70816 fix.go:107] machineExists: false. err=machine does not exist
	I0115 05:45:51.572238   70816 out.go:177] * docker "multinode-456000" container is missing, will recreate.
	I0115 05:45:51.615784   70816 delete.go:124] DEMOLISHING multinode-456000 ...
	I0115 05:45:51.615985   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:51.667151   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:45:51.667198   70816 stop.go:75] unable to get state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:51.667217   70816 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:51.667585   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:51.717824   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:51.717888   70816 delete.go:82] Unable to get host status for multinode-456000, assuming it has already been deleted: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:51.717973   70816 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:45:51.767515   70816 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:45:51.767543   70816 kic.go:371] could not find the container multinode-456000 to remove it. will try anyways
	I0115 05:45:51.767618   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:51.818992   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:45:51.819036   70816 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:51.819119   70816 cli_runner.go:164] Run: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0"
	W0115 05:45:51.869549   70816 cli_runner.go:211] docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0115 05:45:51.869578   70816 oci.go:650] error shutdown multinode-456000: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:52.870649   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:52.923781   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:52.923825   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:52.923838   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:45:52.923872   70816 retry.go:31] will retry after 397.715553ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:53.321838   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:53.375339   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:53.375382   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:53.375395   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:45:53.375422   70816 retry.go:31] will retry after 789.423035ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:54.165116   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:54.218453   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:54.218500   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:54.218512   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:45:54.218538   70816 retry.go:31] will retry after 1.369482303s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:55.590293   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:55.643636   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:55.643677   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:55.643699   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:45:55.643723   70816 retry.go:31] will retry after 1.191475319s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:56.837071   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:56.892508   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:56.892557   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:56.892568   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:45:56.892589   70816 retry.go:31] will retry after 2.874119905s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:59.768396   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:45:59.820611   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:45:59.820658   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:45:59.820667   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:45:59.820696   70816 retry.go:31] will retry after 2.046631515s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:46:01.867632   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:46:01.921160   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:46:01.921205   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:46:01.921214   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:46:01.921238   70816 retry.go:31] will retry after 4.902148527s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:46:06.824157   70816 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:46:06.877973   70816 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:46:06.878016   70816 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:46:06.878027   70816 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:46:06.878058   70816 oci.go:88] couldn't shut down multinode-456000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	 
	I0115 05:46:06.878143   70816 cli_runner.go:164] Run: docker rm -f -v multinode-456000
	I0115 05:46:06.929140   70816 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:46:06.979527   70816 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:46:06.979636   70816 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:46:07.030205   70816 cli_runner.go:164] Run: docker network rm multinode-456000
	I0115 05:46:07.138826   70816 fix.go:114] Sleeping 1 second for extra luck!
	I0115 05:46:08.140328   70816 start.go:125] createHost starting for "" (driver="docker")
	I0115 05:46:08.163417   70816 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 05:46:08.163581   70816 start.go:159] libmachine.API.Create for "multinode-456000" (driver="docker")
	I0115 05:46:08.163625   70816 client.go:168] LocalClient.Create starting
	I0115 05:46:08.163822   70816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 05:46:08.163929   70816 main.go:141] libmachine: Decoding PEM data...
	I0115 05:46:08.163958   70816 main.go:141] libmachine: Parsing certificate...
	I0115 05:46:08.164039   70816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 05:46:08.164108   70816 main.go:141] libmachine: Decoding PEM data...
	I0115 05:46:08.164124   70816 main.go:141] libmachine: Parsing certificate...
	I0115 05:46:08.164807   70816 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 05:46:08.217421   70816 cli_runner.go:211] docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 05:46:08.217514   70816 network_create.go:281] running [docker network inspect multinode-456000] to gather additional debugging logs...
	I0115 05:46:08.217531   70816 cli_runner.go:164] Run: docker network inspect multinode-456000
	W0115 05:46:08.267984   70816 cli_runner.go:211] docker network inspect multinode-456000 returned with exit code 1
	I0115 05:46:08.268011   70816 network_create.go:284] error running [docker network inspect multinode-456000]: docker network inspect multinode-456000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-456000 not found
	I0115 05:46:08.268026   70816 network_create.go:286] output of [docker network inspect multinode-456000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-456000 not found
	
	** /stderr **
	I0115 05:46:08.268177   70816 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:46:08.320136   70816 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 05:46:08.321739   70816 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 05:46:08.322073   70816 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d5bac0}
	I0115 05:46:08.322091   70816 network_create.go:124] attempt to create docker network multinode-456000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0115 05:46:08.322170   70816 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-456000 multinode-456000
	I0115 05:46:08.407556   70816 network_create.go:108] docker network multinode-456000 192.168.67.0/24 created
	I0115 05:46:08.407589   70816 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-456000" container
	I0115 05:46:08.407713   70816 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 05:46:08.458723   70816 cli_runner.go:164] Run: docker volume create multinode-456000 --label name.minikube.sigs.k8s.io=multinode-456000 --label created_by.minikube.sigs.k8s.io=true
	I0115 05:46:08.508647   70816 oci.go:103] Successfully created a docker volume multinode-456000
	I0115 05:46:08.508770   70816 cli_runner.go:164] Run: docker run --rm --name multinode-456000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-456000 --entrypoint /usr/bin/test -v multinode-456000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 05:46:08.803509   70816 oci.go:107] Successfully prepared a docker volume multinode-456000
	I0115 05:46:08.803540   70816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:46:08.803559   70816 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 05:46:08.803669   70816 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-456000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 05:52:08.145399   70816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:52:08.145528   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:08.198388   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:08.198531   70816 retry.go:31] will retry after 292.278081ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:08.491794   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:08.543400   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:08.543498   70816 retry.go:31] will retry after 456.023333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:09.000633   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:09.054661   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:09.054773   70816 retry.go:31] will retry after 759.770267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:09.816070   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:09.871312   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:52:09.871421   70816 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:52:09.871438   70816 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:09.871495   70816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:52:09.871550   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:09.920944   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:09.921043   70816 retry.go:31] will retry after 273.554179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:10.196074   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:10.249662   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:10.249765   70816 retry.go:31] will retry after 438.90671ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:10.690098   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:10.742027   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:10.742126   70816 retry.go:31] will retry after 735.585288ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:11.479355   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:11.533543   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:52:11.533648   70816 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:52:11.533670   70816 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:11.533682   70816 start.go:128] duration metric: createHost completed in 6m3.413266275s
	I0115 05:52:11.533750   70816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 05:52:11.533820   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:11.585229   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:11.585344   70816 retry.go:31] will retry after 356.202013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:11.942549   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:11.996348   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:11.996439   70816 retry.go:31] will retry after 341.693744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:12.340471   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:12.392784   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:12.392891   70816 retry.go:31] will retry after 759.740909ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:13.154277   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:13.206371   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:52:13.206471   70816 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:52:13.206486   70816 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:13.206545   70816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 05:52:13.206606   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:13.256868   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:13.256964   70816 retry.go:31] will retry after 245.166444ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:13.504255   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:13.555601   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:13.555692   70816 retry.go:31] will retry after 204.793892ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:13.762823   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:13.815385   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	I0115 05:52:13.815474   70816 retry.go:31] will retry after 647.962403ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:14.465746   70816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000
	W0115 05:52:14.519413   70816 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000 returned with exit code 1
	W0115 05:52:14.519523   70816 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	W0115 05:52:14.519539   70816 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-456000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-456000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:14.519550   70816 fix.go:56] fixHost completed within 6m23.042169872s
	I0115 05:52:14.519557   70816 start.go:83] releasing machines lock for "multinode-456000", held for 6m23.042211021s
	W0115 05:52:14.519636   70816 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-456000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-456000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0115 05:52:14.563003   70816 out.go:177] 
	W0115 05:52:14.586298   70816 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0115 05:52:14.586366   70816 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0115 05:52:14.586449   70816 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0115 05:52:14.629912   70816 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-456000" : exit status 52
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-456000
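Editor's note on the stderr log above: the "multinode-456000" container is never created, so every lookup of the host port published for 22/tcp fails with "No such container", and minikube keeps retrying with growing delays (the retry.go:31 lines) until the 360-second createHost budget runs out. The following is a minimal illustrative sketch of that lookup-and-retry pattern, not minikube's actual retry.go or cli_runner.go code; the container name and the docker inspect template are copied from the log, while the deadline, backoff values, and helper names are hypothetical.

// Illustrative sketch only (not minikube source): poll Docker for the host
// port that publishes 22/tcp on the container named in the log, backing off
// between attempts until a hypothetical deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort runs the same docker inspect template that appears in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // hypothetical budget
	backoff := 250 * time.Millisecond            // hypothetical starting delay
	for {
		port, err := sshHostPort("multinode-456000")
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // the real retry helper varies these intervals, as the log shows
	}
}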
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "8e43f70d1baf1704ac1bc2e5ec291c548ae587ed5b0694e18934c07f566bf186",
	        "Created": "2024-01-15T13:46:08.370563998Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (110.473314ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:52:14.930532   71094 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (787.68s)
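Note on the post-mortem above: the container "multinode-456000" no longer exists, so the bare `docker inspect multinode-456000` falls through to the leftover minikube-created bridge network of the same name, which is the JSON shown (Scope "local", bridge driver, IPAM subnet 192.168.67.0/24); it is the container-level probes that keep failing. A minimal way to see the two lookups side by side, using only commands that already appear in this log:

	docker container inspect multinode-456000 --format '{{.State.Status}}'   # Error response from daemon: No such container: multinode-456000 (exit status 1)
	docker network inspect multinode-456000                                  # prints the bridge network shown in the post-mortem above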

                                                
                                    
TestMultiNode/serial/DeleteNode (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 node delete m03: exit status 80 (205.253183ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-456000 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr: exit status 7 (109.948286ms)

                                                
                                                
-- stdout --
	multinode-456000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:52:15.194470   71102 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:52:15.194953   71102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:52:15.194959   71102 out.go:309] Setting ErrFile to fd 2...
	I0115 05:52:15.194963   71102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:52:15.195160   71102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:52:15.195335   71102 out.go:303] Setting JSON to false
	I0115 05:52:15.195358   71102 mustload.go:65] Loading cluster: multinode-456000
	I0115 05:52:15.195402   71102 notify.go:220] Checking for updates...
	I0115 05:52:15.195623   71102 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:52:15.195634   71102 status.go:255] checking status of multinode-456000 ...
	I0115 05:52:15.196088   71102 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:15.246099   71102 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:15.246162   71102 status.go:330] multinode-456000 host status = "" (err=state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	)
	I0115 05:52:15.246179   71102 status.go:257] multinode-456000 status: &{Name:multinode-456000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0115 05:52:15.246195   71102 status.go:260] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	E0115 05:52:15.246203   71102 status.go:263] The "multinode-456000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "8e43f70d1baf1704ac1bc2e5ec291c548ae587ed5b0694e18934c07f566bf186",
	        "Created": "2024-01-15T13:46:08.370563998Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (108.062659ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:52:15.408561   71108 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.48s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 stop: exit status 82 (11.882372484s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	* Stopping node "multinode-456000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-456000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-456000 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 status: exit status 7 (109.232232ms)

                                                
                                                
-- stdout --
	multinode-456000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:52:27.399867   71127 status.go:260] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	E0115 05:52:27.399879   71127 status.go:263] The "multinode-456000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr: exit status 7 (109.512416ms)

                                                
                                                
-- stdout --
	multinode-456000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:52:27.457782   71131 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:52:27.458094   71131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:52:27.458100   71131 out.go:309] Setting ErrFile to fd 2...
	I0115 05:52:27.458105   71131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:52:27.458302   71131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:52:27.458501   71131 out.go:303] Setting JSON to false
	I0115 05:52:27.458525   71131 mustload.go:65] Loading cluster: multinode-456000
	I0115 05:52:27.458572   71131 notify.go:220] Checking for updates...
	I0115 05:52:27.458805   71131 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:52:27.458817   71131 status.go:255] checking status of multinode-456000 ...
	I0115 05:52:27.459289   71131 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:27.509381   71131 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:27.509439   71131 status.go:330] multinode-456000 host status = "" (err=state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	)
	I0115 05:52:27.509456   71131 status.go:257] multinode-456000 status: &{Name:multinode-456000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0115 05:52:27.509475   71131 status.go:260] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	E0115 05:52:27.509483   71131 status.go:263] The "multinode-456000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr": multinode-456000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-456000 status --alsologtostderr": multinode-456000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "8e43f70d1baf1704ac1bc2e5ec291c548ae587ed5b0694e18934c07f566bf186",
	        "Created": "2024-01-15T13:46:08.370563998Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (109.058551ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:52:27.673132   71137 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (12.27s)
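The stop failure follows the same pattern: each "Stopping node" attempt checks container state via `docker container inspect multinode-456000 --format={{.State.Status}}`, and with the container already gone every check exits with status 1, so the run ends in GUEST_STOP_TIMEOUT. The report's own earlier suggestion is to discard the stale profile before retrying; a sketch of that remediation, assuming the profile is simply recreated with the same flags used elsewhere in this run:

	out/minikube-darwin-amd64 delete -p multinode-456000
	out/minikube-darwin-amd64 start -p multinode-456000 --wait=true --driver=docker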

                                                
                                    
TestMultiNode/serial/RestartMultiNode (157.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-456000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0115 05:54:53.978324   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-456000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m37.083150016s)

                                                
                                                
-- stdout --
	* [multinode-456000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-456000 in cluster multinode-456000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* docker "multinode-456000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:52:27.785961   71143 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:52:27.786182   71143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:52:27.786187   71143 out.go:309] Setting ErrFile to fd 2...
	I0115 05:52:27.786191   71143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:52:27.786362   71143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:52:27.787755   71143 out.go:303] Setting JSON to false
	I0115 05:52:27.809970   71143 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":33490,"bootTime":1705293257,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:52:27.810087   71143 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:52:27.834139   71143 out.go:177] * [multinode-456000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:52:27.854968   71143 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 05:52:27.855084   71143 notify.go:220] Checking for updates...
	I0115 05:52:27.876820   71143 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:52:27.899096   71143 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:52:27.921092   71143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:52:27.941890   71143 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 05:52:27.962996   71143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 05:52:27.984808   71143 config.go:182] Loaded profile config "multinode-456000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:52:27.985574   71143 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:52:28.042573   71143 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:52:28.042726   71143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:52:28.141736   71143 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:103 SystemTime:2024-01-15 13:52:28.132333235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:52:28.163215   71143 out.go:177] * Using the docker driver based on existing profile
	I0115 05:52:28.184423   71143 start.go:298] selected driver: docker
	I0115 05:52:28.184456   71143 start.go:902] validating driver "docker" against &{Name:multinode-456000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-456000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:52:28.184571   71143 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 05:52:28.184769   71143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:52:28.285540   71143 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:103 SystemTime:2024-01-15 13:52:28.27649053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:52:28.288677   71143 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 05:52:28.288714   71143 cni.go:84] Creating CNI manager for ""
	I0115 05:52:28.288722   71143 cni.go:136] 1 nodes found, recommending kindnet
	I0115 05:52:28.288730   71143 start_flags.go:321] config:
	{Name:multinode-456000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-456000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:52:28.332270   71143 out.go:177] * Starting control plane node multinode-456000 in cluster multinode-456000
	I0115 05:52:28.353349   71143 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 05:52:28.397346   71143 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 05:52:28.418402   71143 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:52:28.418491   71143 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0115 05:52:28.418503   71143 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 05:52:28.418521   71143 cache.go:56] Caching tarball of preloaded images
	I0115 05:52:28.418749   71143 preload.go:174] Found /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 05:52:28.418768   71143 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0115 05:52:28.419820   71143 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/multinode-456000/config.json ...
	I0115 05:52:28.470803   71143 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 05:52:28.470823   71143 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 05:52:28.470843   71143 cache.go:194] Successfully downloaded all kic artifacts
	I0115 05:52:28.470893   71143 start.go:365] acquiring machines lock for multinode-456000: {Name:mk3c781fec38dc7197a8eca34d9ca558cb93e4e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 05:52:28.470982   71143 start.go:369] acquired machines lock for "multinode-456000" in 70.217µs
	I0115 05:52:28.471003   71143 start.go:96] Skipping create...Using existing machine configuration
	I0115 05:52:28.471012   71143 fix.go:54] fixHost starting: 
	I0115 05:52:28.471261   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:28.521350   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:28.521423   71143 fix.go:102] recreateIfNeeded on multinode-456000: state= err=unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:28.521443   71143 fix.go:107] machineExists: false. err=machine does not exist
	I0115 05:52:28.543203   71143 out.go:177] * docker "multinode-456000" container is missing, will recreate.
	I0115 05:52:28.585976   71143 delete.go:124] DEMOLISHING multinode-456000 ...
	I0115 05:52:28.586176   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:28.637014   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:52:28.637059   71143 stop.go:75] unable to get state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:28.637078   71143 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:28.637430   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:28.688281   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:28.688341   71143 delete.go:82] Unable to get host status for multinode-456000, assuming it has already been deleted: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:28.688433   71143 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:52:28.738503   71143 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:52:28.738537   71143 kic.go:371] could not find the container multinode-456000 to remove it. will try anyways
	I0115 05:52:28.738605   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:28.788220   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	W0115 05:52:28.788262   71143 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:28.788354   71143 cli_runner.go:164] Run: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0"
	W0115 05:52:28.838413   71143 cli_runner.go:211] docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0115 05:52:28.838447   71143 oci.go:650] error shutdown multinode-456000: docker exec --privileged -t multinode-456000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:29.840752   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:29.893789   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:29.893842   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:29.893854   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:29.893890   71143 retry.go:31] will retry after 265.799288ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:30.161954   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:30.214875   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:30.214919   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:30.214933   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:30.214958   71143 retry.go:31] will retry after 511.598206ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:30.727341   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:30.781555   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:30.781609   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:30.781636   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:30.781661   71143 retry.go:31] will retry after 816.701047ms: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:31.599272   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:31.653125   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:31.653167   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:31.653176   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:31.653201   71143 retry.go:31] will retry after 1.818521934s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:33.471871   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:33.522021   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:33.522064   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:33.522082   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:33.522107   71143 retry.go:31] will retry after 3.115894511s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:36.638437   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:36.690658   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:36.690708   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:36.690726   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:36.690753   71143 retry.go:31] will retry after 3.435831825s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:40.127387   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:40.181102   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:40.181154   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:40.181164   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:40.181189   71143 retry.go:31] will retry after 4.108056122s: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:44.290423   71143 cli_runner.go:164] Run: docker container inspect multinode-456000 --format={{.State.Status}}
	W0115 05:52:44.346051   71143 cli_runner.go:211] docker container inspect multinode-456000 --format={{.State.Status}} returned with exit code 1
	I0115 05:52:44.346094   71143 oci.go:662] temporary error verifying shutdown: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	I0115 05:52:44.346102   71143 oci.go:664] temporary error: container multinode-456000 status is  but expect it to be exited
	I0115 05:52:44.346132   71143 oci.go:88] couldn't shut down multinode-456000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000
	 
	I0115 05:52:44.346220   71143 cli_runner.go:164] Run: docker rm -f -v multinode-456000
	I0115 05:52:44.397284   71143 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-456000
	W0115 05:52:44.446949   71143 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-456000 returned with exit code 1
	I0115 05:52:44.447055   71143 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:52:44.497435   71143 cli_runner.go:164] Run: docker network rm multinode-456000
	I0115 05:52:44.690114   71143 fix.go:114] Sleeping 1 second for extra luck!
	I0115 05:52:45.691407   71143 start.go:125] createHost starting for "" (driver="docker")
	I0115 05:52:45.713465   71143 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 05:52:45.713643   71143 start.go:159] libmachine.API.Create for "multinode-456000" (driver="docker")
	I0115 05:52:45.713691   71143 client.go:168] LocalClient.Create starting
	I0115 05:52:45.713886   71143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 05:52:45.713982   71143 main.go:141] libmachine: Decoding PEM data...
	I0115 05:52:45.714014   71143 main.go:141] libmachine: Parsing certificate...
	I0115 05:52:45.714134   71143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 05:52:45.714222   71143 main.go:141] libmachine: Decoding PEM data...
	I0115 05:52:45.714239   71143 main.go:141] libmachine: Parsing certificate...
	I0115 05:52:45.735748   71143 cli_runner.go:164] Run: docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 05:52:45.788227   71143 cli_runner.go:211] docker network inspect multinode-456000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 05:52:45.788325   71143 network_create.go:281] running [docker network inspect multinode-456000] to gather additional debugging logs...
	I0115 05:52:45.788351   71143 cli_runner.go:164] Run: docker network inspect multinode-456000
	W0115 05:52:45.838936   71143 cli_runner.go:211] docker network inspect multinode-456000 returned with exit code 1
	I0115 05:52:45.838970   71143 network_create.go:284] error running [docker network inspect multinode-456000]: docker network inspect multinode-456000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-456000 not found
	I0115 05:52:45.838990   71143 network_create.go:286] output of [docker network inspect multinode-456000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-456000 not found
	
	** /stderr **
	I0115 05:52:45.839134   71143 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 05:52:45.892672   71143 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 05:52:45.893048   71143 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002472180}
	I0115 05:52:45.893071   71143 network_create.go:124] attempt to create docker network multinode-456000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0115 05:52:45.893140   71143 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-456000 multinode-456000
	I0115 05:52:45.978549   71143 network_create.go:108] docker network multinode-456000 192.168.58.0/24 created
	I0115 05:52:45.978594   71143 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-456000" container
	I0115 05:52:45.978716   71143 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 05:52:46.030048   71143 cli_runner.go:164] Run: docker volume create multinode-456000 --label name.minikube.sigs.k8s.io=multinode-456000 --label created_by.minikube.sigs.k8s.io=true
	I0115 05:52:46.079836   71143 oci.go:103] Successfully created a docker volume multinode-456000
	I0115 05:52:46.079956   71143 cli_runner.go:164] Run: docker run --rm --name multinode-456000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-456000 --entrypoint /usr/bin/test -v multinode-456000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 05:52:46.373877   71143 oci.go:107] Successfully prepared a docker volume multinode-456000
	I0115 05:52:46.373918   71143 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:52:46.373932   71143 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 05:52:46.374029   71143 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-456000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-456000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-456000
helpers_test.go:235: (dbg) docker inspect multinode-456000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-456000",
	        "Id": "098b5e2c52aff468e4971dcd32317a6dc6b86c6bcc41e459f7b086e9a6783c92",
	        "Created": "2024-01-15T13:52:45.941910313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-456000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-456000 -n multinode-456000: exit status 7 (108.904626ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 05:55:04.970410   71266 status.go:249] status error: host: state: unknown state "multinode-456000": docker container inspect multinode-456000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-456000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-456000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (157.31s)

                                                
                                    
TestScheduledStopUnix (300.90s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-426000 --memory=2048 --driver=docker 
E0115 05:59:37.089558   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:59:54.018257   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:00:56.755244   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-426000 --memory=2048 --driver=docker : signal: killed (5m0.003532926s)

                                                
                                                
-- stdout --
	* [scheduled-stop-426000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-426000 in cluster scheduled-stop-426000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-426000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-426000 in cluster scheduled-stop-426000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2024-01-15 06:02:20.089749 -0800 PST m=+3636.270785691
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-426000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-426000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-426000",
	        "Id": "1d5e4c905c6a6fe89114baa7f653e819ca1a0e8f319d1606afefe196d72e2213",
	        "Created": "2024-01-15T13:57:21.126789666Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-426000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-426000 -n scheduled-stop-426000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-426000 -n scheduled-stop-426000: exit status 7 (110.420588ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 06:02:20.253233   71810 status.go:249] status error: host: state: unknown state "scheduled-stop-426000": docker container inspect scheduled-stop-426000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-426000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-426000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-426000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-426000
--- FAIL: TestScheduledStopUnix (300.90s)

                                                
                                    
TestSkaffold (300.91s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe943522914 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-709000 --memory=2600 --driver=docker 
E0115 06:04:54.002678   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:05:39.797368   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:05:56.736952   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-709000 --memory=2600 --driver=docker : signal: killed (4m58.050228083s)

                                                
                                                
-- stdout --
	* [skaffold-709000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-709000 in cluster skaffold-709000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-709000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-709000 in cluster skaffold-709000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2024-01-15 06:07:20.971668 -0800 PST m=+3937.169230456
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-709000
helpers_test.go:235: (dbg) docker inspect skaffold-709000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-709000",
	        "Id": "7b542e7d056026e34a0659f54980b595dc59c38e90d0924bf25c8e43f40cc5b7",
	        "Created": "2024-01-15T14:02:24.016073991Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-709000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-709000 -n skaffold-709000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-709000 -n skaffold-709000: exit status 7 (109.984715ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 06:07:21.134490   71962 status.go:249] status error: host: state: unknown state "skaffold-709000": docker container inspect skaffold-709000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-709000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-709000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-709000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-709000
--- FAIL: TestSkaffold (300.91s)

                                                
                                    
TestInsufficientStorage (300.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-570000 --memory=2048 --output=json --wait=true --driver=docker 
E0115 06:09:53.984349   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:10:56.720782   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-570000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003936644s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4c573b21-425f-402d-9ed8-f11dd519bf43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-570000] minikube v1.32.0 on Darwin 14.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91dc534b-35be-491a-acbb-7d1147e3dadf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17953"}}
	{"specversion":"1.0","id":"2c862bf1-1c35-4d5f-96bb-98f5202dc10a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig"}}
	{"specversion":"1.0","id":"d6658979-9f90-4a72-b814-daed9ef51f85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"6dd63a1e-3adb-4b8f-9852-4840aab36f34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1ee1f11c-d742-45c4-8a17-851a6ad30da5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube"}}
	{"specversion":"1.0","id":"d66373d4-fc7f-44d0-a1fa-5ec73dd96d17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dfd224fe-f31d-4aed-ab66-a02aee2ccd95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9c99be9d-bc70-4c77-9ff9-582aebaf1aed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c1383ef5-c163-4e9a-a0ad-7cf8663482f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c53c3513-7c43-433e-b090-bb266be26329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"c622985d-c19b-4188-820c-d29217f26d79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-570000 in cluster insufficient-storage-570000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d929a3ff-98b6-474b-ace4-66182dd06e05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9425a77d-6648-46a7-ac6b-d8eee7c00029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-570000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-570000 --output=json --layout=cluster: context deadline exceeded (626ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-570000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-570000
--- FAIL: TestInsufficientStorage (300.73s)

                                                
                                    
TestRunningBinaryUpgrade (7200.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.413423709 start -p running-upgrade-417000 --memory=2200 --vm-driver=docker 
E0115 06:54:54.192413   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:55:40.104415   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:55:57.043696   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:59:54.300396   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 07:00:57.038684   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (49m21s)
	TestNetworkPlugins/group (49m21s)
	TestRunningBinaryUpgrade (10m9s)
	TestStoppedBinaryUpgrade (23m52s)
	TestStoppedBinaryUpgrade/Upgrade (23m51s)

                                                
                                                
goroutine 2064 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 36 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000103520, 0xc00067fb80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc0004abc20?, {0x526cc60, 0x2a, 0x2a}, {0x10b0185?, 0xc0001900c0?, 0x528e4e0?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc0004abc20)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 6 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000676800)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 175 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fa9ce0, 0xc0001841e0}, 0xc000115750, 0xc000a7f318?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fa9ce0, 0xc0001841e0}, 0x1?, 0x1?, 0xc0001157b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fa9ce0?, 0xc0001841e0?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0001157d0?, 0x117be07?, 0xc000a25dd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 196
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2054 [select, 11 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026ea000, 0xc002b62840)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2048
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 944 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020d2840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 855
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 195 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a7ed20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 660 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0000076c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0000076c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperKitDriverInstallOrUpdate(0xc0000076c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:108 +0x39
testing.tRunner(0xc0000076c0, 0x3b38728)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2088 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4cab73e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0023fa300?, 0xc000819863?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0023fa300, {0xc000819863, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0026c6288, {0xc000819863?, 0x0?, 0xc00218be68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021a88a0, {0x3f85e00, 0xc0026c6288})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f85e80, 0xc0021a88a0}, {0x3f85e00, 0xc0026c6288}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0x3b38701?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1885
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 83 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 82
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

                                                
                                                
goroutine 656 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000103a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000103a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000103a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc000103a00, 0x3b38710)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 655 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000103380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000103380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc000103380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc000103380, 0x3b386e0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 653 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000103040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000103040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc000103040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc000103040, 0x3b386d0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1101 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026eb8c0, 0xc0026ab140)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1100
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 654 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0001031e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0001031e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0001031e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0001031e0, 0x3b386c8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1284 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002c32160, 0xc002b63020)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 842
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 196 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009dab00, 0xc0001841e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 657 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000583ba0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000583ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000583ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc000583ba0, 0x3b38708)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 174 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0009daad0, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f82e00?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a7eba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009dab00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00008c790?, {0x3f87300, 0xc0009b9b30}, 0x1, 0xc0001841e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0023ff0e0?, 0x3b9aca00, 0x0, 0xd0?, 0x104475c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117bda5?, 0xc000ab3b80?, 0xc0023ff500?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 196
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 176 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 175
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 661 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0005236c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0005236c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperkitDriverSkipUpgrade(0xc0005236c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:172 +0x2a
testing.tRunner(0xc0005236c0, 0x3b38730)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1842 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c81040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c81040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc002c81040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc002c81040, 0x3b387c8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 731 [IO wait, 115 minutes]:
internal/poll.runtime_pollWait(0x4cab71f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0006e2880?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0006e2880)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0006e2880)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0020ac8a0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc0020ac8a0)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc000672ff0, {0x3f9d500, 0xc0020ac8a0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc000672ff0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0022781a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 728
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

                                                
                                                
goroutine 1229 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002b77ce0, 0xc002b62b40)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1228
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1824 [chan receive, 51 minutes]:
testing.(*T).Run(0xc002c80b60, {0x30e2369?, 0x1e9a9620bb66?}, 0xc002b70288)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002c80b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002c80b60, 0x3b387b0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1914 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0023901a0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0023901a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0023901a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0023901a0, 0xc000a89080)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 978 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fa9ce0, 0xc0001841e0}, 0xc00218b750, 0xc000984f58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fa9ce0, 0xc0001841e0}, 0x1?, 0x1?, 0xc00218b7b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fa9ce0?, 0xc0001841e0?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00218b7d0?, 0x117be07?, 0xc0008784b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1911 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c81a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c81a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002c81a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002c81a00, 0xc000a88e00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1912 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c81ba0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c81ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002c81ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002c81ba0, 0xc000a88e80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 961 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0023659c0, 0xc0001841e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 855
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 2089 [select, 11 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ab3760, 0xc002277920)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1885
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 2048 [syscall, 11 minutes]:
syscall.syscall6(0x1010585?, 0xc0008f3788?, 0xc0008f3678?, 0xc0008f37a8?, 0x100c0008f3770?, 0x1000000000003?, 0x59c9fc8?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0008f3720?, 0x1010905?, 0x90?, 0x30505e0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc000a7c6e0?, 0xc0008f3754, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002c3a0c0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0026ea000)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000192340?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000192340, 0xc0026ea000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x36f
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc0008f3c18?, {0x3f93330, 0xc0020ac9e0}, 0x3b39778, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x13c
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0xc00226ac80?, {0x3f93330?, 0xc0020ac9e0?}, 0x1016f12?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc00226aec0?, 0x3b9aca00, 0x1a3185c5000, {0xc00226ad08?, 0x2c03760?, 0x40b4a00?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc000192340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc000192340, 0xc0023646c0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1886
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1906 [chan receive, 51 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc002c80d00, 0xc002b70288)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1824
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1885 [syscall, 11 minutes]:
syscall.syscall6(0x1010585?, 0xc000a6f7a8?, 0xc000a6f698?, 0xc000a6f7c8?, 0x100c000a6f790?, 0x1000000000003?, 0x4c556530?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000a6f740?, 0x1010905?, 0x90?, 0x30505e0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc000547340?, 0xc000a6f774, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002088960)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000ab3760)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000682b60?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000682b60, 0xc000ab3760)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:120 +0x36d
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc000a6fc38?, {0x3f93330, 0xc0020aca20}, 0x3b39778, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x13c
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0xc000a6fca0?, {0x3f93330?, 0xc0020aca20?}, 0x1016f12?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc000a6feb0?, 0x3b9aca00, 0x1a3185c5000, {0xc000a6fd10?, 0x2c03760?, 0x51b5600?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc000682b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:125 +0x4f7
testing.tRunner(0xc000682b60, 0x3b387d8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1908 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c81520)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c81520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002c81520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002c81520, 0xc000a88b80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1841 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c80ea0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c80ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002c80ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002c80ea0, 0x3b387b8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2052 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4cab74d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002314420?, 0xc002395b18?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002314420, {0xc002395b18, 0x4e8, 0x4e8})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002626080, {0xc002395b18?, 0xc0022bc668?, 0xc0022bc668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0023101e0, {0x3f85e00, 0xc002626080})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f85e80, 0xc0023101e0}, {0x3f85e00, 0xc002626080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0022765a0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2048
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1913 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c81d40)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c81d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002c81d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002c81d40, 0xc000a89000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 977 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002365990, 0x2c)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f82e00?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020d2660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0023659c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1c31079efcd2?, {0x3f87300, 0xc000a240c0}, 0x1, 0xc0001841e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002276960?, 0x3b9aca00, 0x0, 0xd0?, 0x104475c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117bda5?, 0xc002232840?, 0xc0023ff740?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 979 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 978
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 1909 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c816c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c816c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002c816c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002c816c0, 0xc000a88d00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2087 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x4d315110, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0023fa240?, 0xc00067eb06?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0023fa240, {0xc00067eb06, 0x4fa, 0x4fa})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0026c6270, {0xc00067eb06?, 0xc0005829c0?, 0xc002189e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021a8870, {0x3f85e00, 0xc0026c6270})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f85e80, 0xc0021a8870}, {0x3f85e00, 0xc0026c6270}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00020f200?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1885
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1910 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c81860)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c81860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002c81860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002c81860, 0xc000a88d80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1299 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002c32420, 0xc002b62300)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1298
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 1886 [chan receive, 24 minutes]:
testing.(*T).Run(0xc000682d00, {0x30e6479?, 0x30ed01e?}, 0xc0023646c0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc000682d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc000682d00, 0x3b38800)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1907 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002c81380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002c81380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002c81380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002c81380, 0xc000a88780)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1290 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc00285c240)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1303
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

goroutine 1898 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000682680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000682680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc002314240?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc000682680, 0x3b387f8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2053 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4cab6a30, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0023145a0?, 0xc00081946d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0023145a0, {0xc00081946d, 0x393, 0x393})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0026260c8, {0xc00081946d?, 0x458ca4f?, 0xc0022b6668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002310210, {0x3f85e00, 0xc0026260c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f85e80, 0xc002310210}, {0x3f85e00, 0xc0026260c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000a89100?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2048
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1915 [chan receive, 51 minutes]:
testing.(*testContext).waitParallel(0xc0009c1a40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002391380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002391380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002391380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002391380, 0xc000a89100)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1906
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1289 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc00285c240)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1303
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

TestKubernetesUpgrade (772.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-430000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-430000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 52 (12m37.295371367s)

-- stdout --
	* [kubernetes-upgrade-430000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-430000 in cluster kubernetes-upgrade-430000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-430000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0115 06:24:59.624529   73110 out.go:296] Setting OutFile to fd 1 ...
	I0115 06:24:59.624759   73110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 06:24:59.624764   73110 out.go:309] Setting ErrFile to fd 2...
	I0115 06:24:59.624768   73110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 06:24:59.624958   73110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 06:24:59.626552   73110 out.go:303] Setting JSON to false
	I0115 06:24:59.649029   73110 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":35442,"bootTime":1705293257,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 06:24:59.649150   73110 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 06:24:59.671080   73110 out.go:177] * [kubernetes-upgrade-430000] minikube v1.32.0 on Darwin 14.2.1
	I0115 06:24:59.692964   73110 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 06:24:59.692998   73110 notify.go:220] Checking for updates...
	I0115 06:24:59.736732   73110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 06:24:59.758861   73110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 06:24:59.780724   73110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 06:24:59.802718   73110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 06:24:59.824890   73110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 06:24:59.847684   73110 config.go:182] Loaded profile config "missing-upgrade-544000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0115 06:24:59.847830   73110 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 06:24:59.905099   73110 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 06:24:59.905256   73110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 06:25:00.007123   73110 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:12 ContainersRunning:0 ContainersPaused:0 ContainersStopped:12 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:183 SystemTime:2024-01-15 14:24:59.996837684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 06:25:00.028539   73110 out.go:177] * Using the docker driver based on user configuration
	I0115 06:25:00.049577   73110 start.go:298] selected driver: docker
	I0115 06:25:00.049609   73110 start.go:902] validating driver "docker" against <nil>
	I0115 06:25:00.049625   73110 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 06:25:00.054326   73110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 06:25:00.158067   73110 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:12 ContainersRunning:0 ContainersPaused:0 ContainersStopped:12 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:183 SystemTime:2024-01-15 14:25:00.148141282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 06:25:00.158260   73110 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 06:25:00.158463   73110 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 06:25:00.179850   73110 out.go:177] * Using Docker Desktop driver with root privileges
	I0115 06:25:00.201631   73110 cni.go:84] Creating CNI manager for ""
	I0115 06:25:00.201653   73110 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0115 06:25:00.201663   73110 start_flags.go:321] config:
	{Name:kubernetes-upgrade-430000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-430000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 06:25:00.223007   73110 out.go:177] * Starting control plane node kubernetes-upgrade-430000 in cluster kubernetes-upgrade-430000
	I0115 06:25:00.266723   73110 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 06:25:00.287950   73110 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 06:25:00.309919   73110 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0115 06:25:00.310009   73110 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0115 06:25:00.310018   73110 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 06:25:00.310036   73110 cache.go:56] Caching tarball of preloaded images
	I0115 06:25:00.310257   73110 preload.go:174] Found /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 06:25:00.310277   73110 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0115 06:25:00.310419   73110 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/kubernetes-upgrade-430000/config.json ...
	I0115 06:25:00.311189   73110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/kubernetes-upgrade-430000/config.json: {Name:mkf7d835c71978f0ab0b93034a523b1997f1452c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 06:25:00.363465   73110 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 06:25:00.363487   73110 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 06:25:00.363510   73110 cache.go:194] Successfully downloaded all kic artifacts
	I0115 06:25:00.363567   73110 start.go:365] acquiring machines lock for kubernetes-upgrade-430000: {Name:mkd506aad70046d778f113878605bcb47938df38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 06:25:00.363722   73110 start.go:369] acquired machines lock for "kubernetes-upgrade-430000" in 141.981µs
	I0115 06:25:00.363747   73110 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-430000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-430000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0115 06:25:00.363826   73110 start.go:125] createHost starting for "" (driver="docker")
	I0115 06:25:00.409350   73110 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 06:25:00.409705   73110 start.go:159] libmachine.API.Create for "kubernetes-upgrade-430000" (driver="docker")
	I0115 06:25:00.409756   73110 client.go:168] LocalClient.Create starting
	I0115 06:25:00.409972   73110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 06:25:00.410060   73110 main.go:141] libmachine: Decoding PEM data...
	I0115 06:25:00.410094   73110 main.go:141] libmachine: Parsing certificate...
	I0115 06:25:00.410188   73110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 06:25:00.410280   73110 main.go:141] libmachine: Decoding PEM data...
	I0115 06:25:00.410297   73110 main.go:141] libmachine: Parsing certificate...
	I0115 06:25:00.411346   73110 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-430000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 06:25:00.462714   73110 cli_runner.go:211] docker network inspect kubernetes-upgrade-430000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 06:25:00.462840   73110 network_create.go:281] running [docker network inspect kubernetes-upgrade-430000] to gather additional debugging logs...
	I0115 06:25:00.462858   73110 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-430000
	W0115 06:25:00.513864   73110 cli_runner.go:211] docker network inspect kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:25:00.513888   73110 network_create.go:284] error running [docker network inspect kubernetes-upgrade-430000]: docker network inspect kubernetes-upgrade-430000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-430000 not found
	I0115 06:25:00.513900   73110 network_create.go:286] output of [docker network inspect kubernetes-upgrade-430000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-430000 not found
	
	** /stderr **
	I0115 06:25:00.514047   73110 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 06:25:00.566867   73110 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:25:00.567207   73110 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f2f70}
	I0115 06:25:00.567225   73110 network_create.go:124] attempt to create docker network kubernetes-upgrade-430000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0115 06:25:00.567297   73110 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 kubernetes-upgrade-430000
	W0115 06:25:00.619285   73110 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 kubernetes-upgrade-430000 returned with exit code 1
	W0115 06:25:00.619324   73110 network_create.go:149] failed to create docker network kubernetes-upgrade-430000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 kubernetes-upgrade-430000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0115 06:25:00.619343   73110 network_create.go:116] failed to create docker network kubernetes-upgrade-430000 192.168.58.0/24, will retry: subnet is taken
	I0115 06:25:00.620815   73110 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:25:00.621136   73110 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f3d40}
	I0115 06:25:00.621147   73110 network_create.go:124] attempt to create docker network kubernetes-upgrade-430000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0115 06:25:00.621210   73110 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 kubernetes-upgrade-430000
	I0115 06:25:00.707197   73110 network_create.go:108] docker network kubernetes-upgrade-430000 192.168.67.0/24 created
	I0115 06:25:00.707240   73110 kic.go:121] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-430000" container
	I0115 06:25:00.707373   73110 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 06:25:00.761434   73110 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-430000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 --label created_by.minikube.sigs.k8s.io=true
	I0115 06:25:00.813199   73110 oci.go:103] Successfully created a docker volume kubernetes-upgrade-430000
	I0115 06:25:00.813313   73110 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-430000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 --entrypoint /usr/bin/test -v kubernetes-upgrade-430000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 06:25:01.215618   73110 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-430000
	I0115 06:25:01.215674   73110 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0115 06:25:01.215688   73110 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 06:25:01.215784   73110 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-430000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 06:31:00.537104   73110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 06:31:00.537246   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:00.592522   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:00.592639   73110 retry.go:31] will retry after 366.07741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:00.959344   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:01.010110   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:01.010235   73110 retry.go:31] will retry after 420.71296ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:01.432106   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:01.483789   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:01.483909   73110 retry.go:31] will retry after 537.223034ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:02.021700   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:02.074252   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	W0115 06:31:02.074356   73110 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	
	W0115 06:31:02.074378   73110 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:02.074437   73110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 06:31:02.074496   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:02.125415   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:02.125509   73110 retry.go:31] will retry after 192.01642ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:02.317822   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:02.371261   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:02.371371   73110 retry.go:31] will retry after 290.031639ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:02.661829   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:02.716072   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:02.716175   73110 retry.go:31] will retry after 753.355732ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:03.469913   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:31:03.522643   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	W0115 06:31:03.522740   73110 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	
	W0115 06:31:03.522758   73110 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:03.522773   73110 start.go:128] duration metric: createHost completed in 6m3.033370857s
	I0115 06:31:03.522782   73110 start.go:83] releasing machines lock for "kubernetes-upgrade-430000", held for 6m3.03349383s
	W0115 06:31:03.522794   73110 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I0115 06:31:03.523246   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:03.574121   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:03.574182   73110 delete.go:82] Unable to get host status for kubernetes-upgrade-430000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	W0115 06:31:03.574285   73110 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0115 06:31:03.574294   73110 start.go:709] Will try again in 5 seconds ...
	I0115 06:31:08.576421   73110 start.go:365] acquiring machines lock for kubernetes-upgrade-430000: {Name:mkd506aad70046d778f113878605bcb47938df38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 06:31:08.577375   73110 start.go:369] acquired machines lock for "kubernetes-upgrade-430000" in 861.819µs
	I0115 06:31:08.577488   73110 start.go:96] Skipping create...Using existing machine configuration
	I0115 06:31:08.577505   73110 fix.go:54] fixHost starting: 
	I0115 06:31:08.578014   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:08.631325   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:08.631373   73110 fix.go:102] recreateIfNeeded on kubernetes-upgrade-430000: state= err=unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:08.631393   73110 fix.go:107] machineExists: false. err=machine does not exist
	I0115 06:31:08.653515   73110 out.go:177] * docker "kubernetes-upgrade-430000" container is missing, will recreate.
	I0115 06:31:08.674976   73110 delete.go:124] DEMOLISHING kubernetes-upgrade-430000 ...
	I0115 06:31:08.675173   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:08.727358   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	W0115 06:31:08.727413   73110 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:08.727435   73110 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:08.727822   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:08.778261   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:08.778313   73110 delete.go:82] Unable to get host status for kubernetes-upgrade-430000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:08.778403   73110 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-430000
	W0115 06:31:08.828816   73110 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:08.828865   73110 kic.go:371] could not find the container kubernetes-upgrade-430000 to remove it. will try anyways
	I0115 06:31:08.828949   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:08.879782   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	W0115 06:31:08.879843   73110 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:08.879932   73110 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-430000 /bin/bash -c "sudo init 0"
	W0115 06:31:08.930725   73110 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-430000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0115 06:31:08.930765   73110 oci.go:650] error shutdown kubernetes-upgrade-430000: docker exec --privileged -t kubernetes-upgrade-430000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:09.931382   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:09.986305   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:09.986360   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:09.986383   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:09.986409   73110 retry.go:31] will retry after 272.218796ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:10.259398   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:10.313946   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:10.313995   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:10.314009   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:10.314036   73110 retry.go:31] will retry after 959.41662ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:11.274282   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:11.327459   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:11.327515   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:11.327527   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:11.327553   73110 retry.go:31] will retry after 1.47201878s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:12.799874   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:12.851298   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:12.851353   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:12.851364   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:12.851398   73110 retry.go:31] will retry after 1.283340619s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:14.135050   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:14.191814   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:14.191867   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:14.191881   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:14.191905   73110 retry.go:31] will retry after 2.884092618s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:17.076629   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:17.129573   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:17.129624   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:17.129636   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:17.129662   73110 retry.go:31] will retry after 3.890208664s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:21.021476   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:21.076316   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:21.076374   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:21.076385   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:21.076413   73110 retry.go:31] will retry after 2.915922374s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:23.993188   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:24.046799   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:24.046854   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:24.046864   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:24.046888   73110 retry.go:31] will retry after 4.980769272s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:29.027947   73110 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}
	W0115 06:31:29.081887   73110 cli_runner.go:211] docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}} returned with exit code 1
	I0115 06:31:29.081940   73110 oci.go:662] temporary error verifying shutdown: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:31:29.081950   73110 oci.go:664] temporary error: container kubernetes-upgrade-430000 status is  but expect it to be exited
	I0115 06:31:29.081990   73110 oci.go:88] couldn't shut down kubernetes-upgrade-430000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	 
	I0115 06:31:29.082069   73110 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-430000
	I0115 06:31:29.133474   73110 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-430000
	W0115 06:31:29.183899   73110 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:29.184018   73110 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-430000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 06:31:29.234860   73110 cli_runner.go:164] Run: docker network rm kubernetes-upgrade-430000
	I0115 06:31:29.330003   73110 fix.go:114] Sleeping 1 second for extra luck!
	I0115 06:31:30.332141   73110 start.go:125] createHost starting for "" (driver="docker")
	I0115 06:31:30.355112   73110 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 06:31:30.355288   73110 start.go:159] libmachine.API.Create for "kubernetes-upgrade-430000" (driver="docker")
	I0115 06:31:30.355333   73110 client.go:168] LocalClient.Create starting
	I0115 06:31:30.355580   73110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/ca.pem
	I0115 06:31:30.355667   73110 main.go:141] libmachine: Decoding PEM data...
	I0115 06:31:30.355691   73110 main.go:141] libmachine: Parsing certificate...
	I0115 06:31:30.355764   73110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17953-64881/.minikube/certs/cert.pem
	I0115 06:31:30.355832   73110 main.go:141] libmachine: Decoding PEM data...
	I0115 06:31:30.355847   73110 main.go:141] libmachine: Parsing certificate...
	I0115 06:31:30.356437   73110 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-430000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 06:31:30.410288   73110 cli_runner.go:211] docker network inspect kubernetes-upgrade-430000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 06:31:30.410387   73110 network_create.go:281] running [docker network inspect kubernetes-upgrade-430000] to gather additional debugging logs...
	I0115 06:31:30.410404   73110 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-430000
	W0115 06:31:30.461378   73110 cli_runner.go:211] docker network inspect kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:31:30.461411   73110 network_create.go:284] error running [docker network inspect kubernetes-upgrade-430000]: docker network inspect kubernetes-upgrade-430000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-430000 not found
	I0115 06:31:30.461425   73110 network_create.go:286] output of [docker network inspect kubernetes-upgrade-430000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-430000 not found
	
	** /stderr **
	I0115 06:31:30.461582   73110 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 06:31:30.514735   73110 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:31:30.516056   73110 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:31:30.517403   73110 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0115 06:31:30.517732   73110 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f75f0}
	I0115 06:31:30.517744   73110 network_create.go:124] attempt to create docker network kubernetes-upgrade-430000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0115 06:31:30.517822   73110 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 kubernetes-upgrade-430000
	I0115 06:31:30.603920   73110 network_create.go:108] docker network kubernetes-upgrade-430000 192.168.76.0/24 created
	I0115 06:31:30.603955   73110 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-430000" container
	I0115 06:31:30.604074   73110 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 06:31:30.658686   73110 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-430000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 --label created_by.minikube.sigs.k8s.io=true
	I0115 06:31:30.709439   73110 oci.go:103] Successfully created a docker volume kubernetes-upgrade-430000
	I0115 06:31:30.709560   73110 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-430000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-430000 --entrypoint /usr/bin/test -v kubernetes-upgrade-430000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 06:31:31.005978   73110 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-430000
	I0115 06:31:31.006014   73110 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0115 06:31:31.006026   73110 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 06:31:31.006127   73110 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-430000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 06:37:30.350002   73110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 06:37:30.350099   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:30.403274   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:30.403395   73110 retry.go:31] will retry after 312.128563ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:30.716809   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:30.772184   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:30.772287   73110 retry.go:31] will retry after 407.845936ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:31.182435   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:31.233545   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:31.233646   73110 retry.go:31] will retry after 796.805542ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:32.031767   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:32.086394   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	W0115 06:37:32.086530   73110 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	
	W0115 06:37:32.086548   73110 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:32.086606   73110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 06:37:32.086680   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:32.137188   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:32.137281   73110 retry.go:31] will retry after 158.049684ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:32.296138   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:32.351132   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:32.351244   73110 retry.go:31] will retry after 488.20628ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:32.841213   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:32.895599   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:32.895707   73110 retry.go:31] will retry after 553.751267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:33.449626   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:33.503557   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	W0115 06:37:33.503678   73110 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	
	W0115 06:37:33.503698   73110 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:33.503707   73110 start.go:128] duration metric: createHost completed in 6m3.177464718s
	I0115 06:37:33.503777   73110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 06:37:33.503840   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:33.554910   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:33.555007   73110 retry.go:31] will retry after 366.168942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:33.921364   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:33.982038   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:33.982153   73110 retry.go:31] will retry after 480.192338ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:34.462736   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:34.516591   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:34.516685   73110 retry.go:31] will retry after 730.403561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:35.249323   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:35.301924   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	W0115 06:37:35.302026   73110 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	
	W0115 06:37:35.302054   73110 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:35.302127   73110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 06:37:35.302190   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:35.352791   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:35.352889   73110 retry.go:31] will retry after 160.169144ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:35.513868   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:35.569044   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:35.569134   73110 retry.go:31] will retry after 338.756384ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:35.908513   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:35.963490   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	I0115 06:37:35.963586   73110 retry.go:31] will retry after 809.335769ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:36.775268   73110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000
	W0115 06:37:36.828027   73110 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000 returned with exit code 1
	W0115 06:37:36.828129   73110 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	
	W0115 06:37:36.828147   73110 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-430000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-430000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	I0115 06:37:36.828169   73110 fix.go:56] fixHost completed within 6m28.257018019s
	I0115 06:37:36.828179   73110 start.go:83] releasing machines lock for "kubernetes-upgrade-430000", held for 6m28.25708044s
	W0115 06:37:36.828260   73110 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-430000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-430000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0115 06:37:36.871794   73110 out.go:177] 
	W0115 06:37:36.893947   73110 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0115 06:37:36.894007   73110 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0115 06:37:36.894050   73110 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0115 06:37:36.915671   73110 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-430000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 52
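
The start log above is dominated by minikube's shutdown-verification loop: oci.go asks Docker for the container state, gets "No such container", and retry.go re-runs the check after a growing delay until it either sees an exited container or gives up. Below is a minimal Go sketch of that polling pattern, assuming only the docker command quoted in the log; the helper names and the backoff schedule are illustrative, not minikube's actual implementation.

	// retry_sketch.go: a minimal sketch of the state-polling loop above (illustrative only).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerState shells out to the same command the log retries:
	// docker container inspect <name> --format {{.State.Status}}
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitExited polls until the container reports "exited" or the timeout passes,
	// sleeping a little longer after each failed check.
	func waitExited(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for time.Now().Before(deadline) {
			state, err := containerState(name)
			if err == nil && state == "exited" {
				return nil
			}
			fmt.Printf("temporary error: state %q (%v), retrying in %v\n", state, err, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait between checks; the real delays in the log also vary
		}
		return fmt.Errorf("%s never reached the exited state within %v", name, timeout)
	}

	func main() {
		if err := waitExited("kubernetes-upgrade-430000", 20*time.Second); err != nil {
			fmt.Println(err)
		}
	}
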
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-430000
version_upgrade_test.go:227: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-430000: exit status 82 (14.342031308s)

                                                
                                                
-- stdout --
	* Stopping node "kubernetes-upgrade-430000"  ...
	* Stopping node "kubernetes-upgrade-430000"  ...
	* Stopping node "kubernetes-upgrade-430000"  ...
	* Stopping node "kubernetes-upgrade-430000"  ...
	* Stopping node "kubernetes-upgrade-430000"  ...
	* Stopping node "kubernetes-upgrade-430000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect kubernetes-upgrade-430000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
version_upgrade_test.go:229: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-430000 failed: exit status 82
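
The stop timeout above follows directly from the start failure: there is no kubernetes-upgrade-430000 container for Docker to stop or inspect. The same root cause appears earlier in the start log, where every df -h /var and df -BG /var probe fails while resolving the host port published for 22/tcp. A short Go sketch of that port lookup follows; hostSSHPort is a made-up name for illustration, but the docker template it runs is the one quoted in the log.

	// sshport_sketch.go: resolve the published 22/tcp host port (illustrative only).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			"{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}",
			container).Output()
		if err != nil {
			// With no such container this mirrors the "exit status 1" retries above.
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("kubernetes-upgrade-430000")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port)
	}
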
panic.go:523: *** TestKubernetesUpgrade FAILED at 2024-01-15 06:37:51.336766 -0800 PST m=+5767.319577439
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-430000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-430000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-430000",
	        "Id": "2d2b2384ce581d9bf332425489a1e53641820e8dfa20987b141c87e4ae8acb6b",
	        "Created": "2024-01-15T14:31:30.565036328Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "kubernetes-upgrade-430000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
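
Note what the post-mortem docker inspect above actually matched: with no object type specified, Docker resolved kubernetes-upgrade-430000 to the leftover bridge network (Subnet 192.168.76.0/24), because the container was never created while the network minikube made for it still exists. A small Go sketch that checks the two object kinds separately, using only commands that already appear in the log:

	// postmortem_sketch.go: distinguish leftover networks from containers (illustrative only).
	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspect runs "docker <kind> inspect <name>" and reports whether such an object exists.
	func inspect(kind, name string) bool {
		if err := exec.Command("docker", kind, "inspect", name).Run(); err != nil {
			fmt.Printf("no %s named %s: %v\n", kind, name, err)
			return false
		}
		fmt.Printf("%s %s exists\n", kind, name)
		return true
	}

	func main() {
		name := "kubernetes-upgrade-430000"
		inspect("container", name) // fails here: the container was never created
		inspect("network", name)   // succeeds here: the bridge network is left behind
	}
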
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-430000 -n kubernetes-upgrade-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-430000 -n kubernetes-upgrade-430000: exit status 7 (109.665407ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 06:37:51.503626   73887 status.go:249] status error: host: state: unknown state "kubernetes-upgrade-430000": docker container inspect kubernetes-upgrade-430000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: kubernetes-upgrade-430000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-430000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-430000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-430000
--- FAIL: TestKubernetesUpgrade (772.55s)
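
One more pattern worth noting from the start log: before recreating the cluster network, minikube walks candidate private /24 subnets (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, ...), skips any that an existing Docker network already claims, and settles here on 192.168.76.0/24. The sketch below reproduces that selection with plain docker CLI calls; the candidate list and step size are assumptions read off the values in the log, not minikube's network.go.

	// subnet_sketch.go: pick the first private /24 no existing Docker network uses (illustrative only).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// usedSubnets collects the subnet of every existing Docker network via the
	// same IPAM template that appears in the log above.
	func usedSubnets() (map[string]bool, error) {
		names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return nil, err
		}
		used := map[string]bool{}
		for _, name := range strings.Fields(string(names)) {
			out, err := exec.Command("docker", "network", "inspect", name,
				"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
			if err != nil {
				continue
			}
			if s := strings.TrimSpace(string(out)); s != "" {
				used[s] = true
			}
		}
		return used, nil
	}

	func main() {
		used, err := usedSubnets()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, third := range []int{49, 58, 67, 76, 85, 94} {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if used[subnet] {
				fmt.Println("skipping reserved subnet", subnet)
				continue
			}
			fmt.Println("using free private subnet", subnet)
			break
		}
	}
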

                                                
                                    
TestMissingContainerUpgrade (2346.68s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3813350043 start -p missing-upgrade-544000 --memory=2200 --driver=docker 
E0115 06:14:54.111078   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:15:56.846735   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:16:17.180092   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:19:54.100845   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:20:56.837643   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:22:19.894723   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:24:54.089004   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3813350043 start -p missing-upgrade-544000 --memory=2200 --driver=docker : exit status 52 (13m14.939223242s)

                                                
                                                
-- stdout --
	* [missing-upgrade-544000] minikube v1.26.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node missing-upgrade-544000 in cluster missing-upgrade-544000
	* Pulling base image ...
	* Downloading Kubernetes v1.24.1 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-544000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: 386.00 MiB / 386.00 MiB  100.00% 34.32 MiB p/s
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-544000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
version_upgrade_test.go:309: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3813350043 start -p missing-upgrade-544000 --memory=2200 --driver=docker 
E0115 06:25:56.827013   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:29:54.215222   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:30:56.953740   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:32:57.286422   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:34:54.210295   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:35:56.949231   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3813350043 start -p missing-upgrade-544000 --memory=2200 --driver=docker : exit status 52 (12m53.063750512s)

                                                
                                                
-- stdout --
	* [missing-upgrade-544000] minikube v1.26.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-544000 in cluster missing-upgrade-544000
	* Pulling base image ...
	* docker "missing-upgrade-544000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-544000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-544000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
version_upgrade_test.go:309: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3813350043 start -p missing-upgrade-544000 --memory=2200 --driver=docker 
E0115 06:39:00.007795   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:39:54.205545   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:40:56.943773   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:44:54.200820   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:45:56.939323   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 06:49:37.270715   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 06:49:54.197156   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3813350043 start -p missing-upgrade-544000 --memory=2200 --driver=docker : exit status 52 (12m53.361473441s)

                                                
                                                
-- stdout --
	* [missing-upgrade-544000] minikube v1.26.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-544000 in cluster missing-upgrade-544000
	* Pulling base image ...
	* docker "missing-upgrade-544000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-544000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-544000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
version_upgrade_test.go:315: release start failed: exit status 52
panic.go:523: *** TestMissingContainerUpgrade FAILED at 2024-01-15 06:51:34.617224 -0800 PST m=+6590.613499832
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-544000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-544000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "missing-upgrade-544000",
	        "Id": "3c203f75f7cc76a73f7ba4aa92fb9b9407fce5af9957ec640b5dc0b83257039e",
	        "Created": "2024-01-15T14:45:27.861749794Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "missing-upgrade-544000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-544000 -n missing-upgrade-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-544000 -n missing-upgrade-544000: exit status 7 (110.610945ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 06:51:34.780287   74658 status.go:249] status error: host: state: unknown state "missing-upgrade-544000": docker container inspect missing-upgrade-544000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-544000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-544000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-544000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-544000
--- FAIL: TestMissingContainerUpgrade (2346.68s)
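
Note: the DRV_CREATE_TIMEOUT output above carries minikube's own remediation hint. A minimal sketch of that suggested recovery, assuming the same profile name and the release binary path captured at the top of this test, and, per the log's suggestion, with any conflicting VPN or firewall software disabled first:

    # remove the stale profile, as the failure output suggests ("minikube delete -p missing-upgrade-544000" may fix it)
    out/minikube-darwin-amd64 delete -p missing-upgrade-544000
    # then retry the same start that timed out
    /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3813350043 start -p missing-upgrade-544000 --memory=2200 --driver=docker

The cleanup step matches what the harness itself runs at helpers_test.go:178 above; the retry is hypothetical and not part of this test run.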

                                                
                                    

Test pass (152/197)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 10.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
9 TestDownloadOnly/v1.16.0/DeleteAll 0.65
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.38
12 TestDownloadOnly/v1.28.4/json-events 10.42
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.3
18 TestDownloadOnly/v1.28.4/DeleteAll 0.64
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.38
21 TestDownloadOnly/v1.29.0-rc.2/json-events 9.66
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.3
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.65
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.37
29 TestDownloadOnlyKic 2.01
30 TestBinaryMirror 1.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 150.42
40 TestAddons/parallel/InspektorGadget 12.31
41 TestAddons/parallel/MetricsServer 5.82
42 TestAddons/parallel/HelmTiller 10.91
44 TestAddons/parallel/CSI 76.86
45 TestAddons/parallel/Headlamp 12.51
46 TestAddons/parallel/CloudSpanner 5.71
47 TestAddons/parallel/LocalPath 55.16
48 TestAddons/parallel/NvidiaDevicePlugin 5.65
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.1
53 TestAddons/StoppedEnableDisable 11.73
64 TestErrorSpam/setup 22.23
65 TestErrorSpam/start 2.05
66 TestErrorSpam/status 1.21
67 TestErrorSpam/pause 1.69
68 TestErrorSpam/unpause 1.79
69 TestErrorSpam/stop 11.47
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 74.65
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 38.07
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
81 TestFunctional/serial/CacheCmd/cache/add_local 1.59
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
86 TestFunctional/serial/CacheCmd/cache/delete 0.16
87 TestFunctional/serial/MinikubeKubectlCmd 0.54
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.76
89 TestFunctional/serial/ExtraConfig 36.77
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3
92 TestFunctional/serial/LogsFileCmd 3.08
93 TestFunctional/serial/InvalidService 4.85
95 TestFunctional/parallel/ConfigCmd 0.53
96 TestFunctional/parallel/DashboardCmd 9.9
97 TestFunctional/parallel/DryRun 1.84
98 TestFunctional/parallel/InternationalLanguage 0.74
99 TestFunctional/parallel/StatusCmd 1.22
104 TestFunctional/parallel/AddonsCmd 0.26
105 TestFunctional/parallel/PersistentVolumeClaim 27.86
107 TestFunctional/parallel/SSHCmd 0.82
108 TestFunctional/parallel/CpCmd 2.84
109 TestFunctional/parallel/MySQL 32.14
110 TestFunctional/parallel/FileSync 0.43
111 TestFunctional/parallel/CertSync 2.6
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
119 TestFunctional/parallel/License 0.68
120 TestFunctional/parallel/Version/short 0.13
121 TestFunctional/parallel/Version/components 1.3
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.65
127 TestFunctional/parallel/ImageCommands/Setup 2.46
128 TestFunctional/parallel/DockerEnv/bash 2.06
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.15
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.31
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.52
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.54
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.83
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.81
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.74
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.67
139 TestFunctional/parallel/ServiceCmd/DeployApp 15.18
140 TestFunctional/parallel/ServiceCmd/List 0.51
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
142 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.19
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
154 TestFunctional/parallel/ServiceCmd/Format 15
155 TestFunctional/parallel/ServiceCmd/URL 15
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
157 TestFunctional/parallel/ProfileCmd/profile_list 0.49
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
161 TestFunctional/parallel/MountCmd/VerifyCleanup 2.46
162 TestFunctional/delete_addon-resizer_images 0.14
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestImageBuild/serial/Setup 22.01
169 TestImageBuild/serial/NormalBuild 1.77
170 TestImageBuild/serial/BuildWithBuildArg 1.29
171 TestImageBuild/serial/BuildWithDockerIgnore 0.75
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.87
182 TestJSONOutput/start/Command 37.32
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.61
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.61
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 10.95
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.78
207 TestKicCustomNetwork/create_custom_network 23.98
208 TestKicCustomNetwork/use_default_bridge_network 24.38
209 TestKicExistingNetwork 23.31
210 TestKicCustomSubnet 23.88
211 TestKicStaticIP 24.21
212 TestMainNoArgs 0.08
213 TestMinikubeProfile 51.69
216 TestMountStart/serial/StartWithMountFirst 7.67
217 TestMountStart/serial/VerifyMountFirst 0.39
218 TestMountStart/serial/StartWithMountSecond 7.21
219 TestMountStart/serial/VerifyMountSecond 0.39
220 TestMountStart/serial/DeleteFirst 2.07
221 TestMountStart/serial/VerifyMountPostDelete 0.38
222 TestMountStart/serial/Stop 1.56
223 TestMountStart/serial/RestartStopped 8.4
224 TestMountStart/serial/VerifyMountPostStop 0.39
243 TestPreload 134.18
TestDownloadOnly/v1.16.0/json-events (10.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-652000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-652000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (10.74428786s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.74s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-652000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-652000: exit status 85 (289.347101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-652000 | jenkins | v1.32.0 | 15 Jan 24 05:01 PST |          |
	|         | -p download-only-652000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 05:01:44
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 05:01:44.075300   65632 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:01:44.075600   65632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:01:44.075605   65632 out.go:309] Setting ErrFile to fd 2...
	I0115 05:01:44.075609   65632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:01:44.075784   65632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	W0115 05:01:44.075885   65632 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17953-64881/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17953-64881/.minikube/config/config.json: no such file or directory
	I0115 05:01:44.077702   65632 out.go:303] Setting JSON to true
	I0115 05:01:44.100636   65632 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":30447,"bootTime":1705293257,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:01:44.100742   65632 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:01:44.125703   65632 out.go:97] [download-only-652000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:01:44.146540   65632 out.go:169] MINIKUBE_LOCATION=17953
	I0115 05:01:44.125865   65632 notify.go:220] Checking for updates...
	W0115 05:01:44.125874   65632 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 05:01:44.189770   65632 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:01:44.210450   65632 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:01:44.231570   65632 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:01:44.252901   65632 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	W0115 05:01:44.297597   65632 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 05:01:44.298116   65632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:01:44.355275   65632 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:01:44.355424   65632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:01:44.457807   65632 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:59 SystemTime:2024-01-15 13:01:44.448649127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:01:44.479584   65632 out.go:97] Using the docker driver based on user configuration
	I0115 05:01:44.479643   65632 start.go:298] selected driver: docker
	I0115 05:01:44.479662   65632 start.go:902] validating driver "docker" against <nil>
	I0115 05:01:44.479880   65632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:01:44.588820   65632 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:59 SystemTime:2024-01-15 13:01:44.579356566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:01:44.588990   65632 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 05:01:44.594681   65632 start_flags.go:392] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I0115 05:01:44.594841   65632 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 05:01:44.616457   65632 out.go:169] Using Docker Desktop driver with root privileges
	I0115 05:01:44.637670   65632 cni.go:84] Creating CNI manager for ""
	I0115 05:01:44.637710   65632 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0115 05:01:44.637729   65632 start_flags.go:321] config:
	{Name:download-only-652000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:01:44.659444   65632 out.go:97] Starting control plane node download-only-652000 in cluster download-only-652000
	I0115 05:01:44.659518   65632 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 05:01:44.681216   65632 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 05:01:44.681275   65632 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0115 05:01:44.681365   65632 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 05:01:44.733122   65632 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0115 05:01:44.733146   65632 cache.go:56] Caching tarball of preloaded images
	I0115 05:01:44.733377   65632 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0115 05:01:44.733596   65632 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 05:01:44.733796   65632 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 05:01:44.733932   65632 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 05:01:44.754243   65632 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0115 05:01:44.754258   65632 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:01:44.829229   65632 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0115 05:01:48.578746   65632 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:01:48.578972   65632 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:01:49.133464   65632 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0115 05:01:49.133694   65632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/download-only-652000/config.json ...
	I0115 05:01:49.133720   65632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/download-only-652000/config.json: {Name:mke7109571d3714d89e58187bf4de36a828b65e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:01:49.145638   65632 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0115 05:01:49.146257   65632 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0115 05:01:50.533851   65632 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-652000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
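
The exit status 85 from "minikube logs" above appears to be expected here: as the captured output notes, the download-only profile never created a control plane node, so there are no cluster logs to collect, and the subtest still passes. A minimal sketch of the follow-up the output itself suggests (hypothetical in this run, since the profile is removed in the DeleteAll/DeleteAlwaysSucceeds steps below):

    # the log's own suggestion for turning the download-only profile into a running cluster
    out/minikube-darwin-amd64 start -p download-only-652000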

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.65s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-652000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (10.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-005000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-005000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (10.421452311s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (10.42s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-005000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-005000: exit status 85 (297.417619ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-652000 | jenkins | v1.32.0 | 15 Jan 24 05:01 PST |                     |
	|         | -p download-only-652000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 05:01 PST | 15 Jan 24 05:01 PST |
	| delete  | -p download-only-652000        | download-only-652000 | jenkins | v1.32.0 | 15 Jan 24 05:01 PST | 15 Jan 24 05:01 PST |
	| start   | -o=json --download-only        | download-only-005000 | jenkins | v1.32.0 | 15 Jan 24 05:01 PST |                     |
	|         | -p download-only-005000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 05:01:56
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 05:01:56.136668   65699 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:01:56.136888   65699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:01:56.136892   65699 out.go:309] Setting ErrFile to fd 2...
	I0115 05:01:56.136896   65699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:01:56.137089   65699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:01:56.138686   65699 out.go:303] Setting JSON to true
	I0115 05:01:56.161341   65699 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":30459,"bootTime":1705293257,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:01:56.161452   65699 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:01:56.183321   65699 out.go:97] [download-only-005000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:01:56.204959   65699 out.go:169] MINIKUBE_LOCATION=17953
	I0115 05:01:56.183549   65699 notify.go:220] Checking for updates...
	I0115 05:01:56.248857   65699 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:01:56.269856   65699 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:01:56.290761   65699 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:01:56.332908   65699 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	W0115 05:01:56.374892   65699 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 05:01:56.375355   65699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:01:56.436567   65699 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:01:56.436694   65699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:01:56.536887   65699 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:false NGoroutines:60 SystemTime:2024-01-15 13:01:56.527087209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:01:56.557866   65699 out.go:97] Using the docker driver based on user configuration
	I0115 05:01:56.557905   65699 start.go:298] selected driver: docker
	I0115 05:01:56.557920   65699 start.go:902] validating driver "docker" against <nil>
	I0115 05:01:56.558118   65699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:01:56.661812   65699 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:false NGoroutines:60 SystemTime:2024-01-15 13:01:56.652045452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:01:56.661993   65699 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 05:01:56.664944   65699 start_flags.go:392] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I0115 05:01:56.665092   65699 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 05:01:56.685911   65699 out.go:169] Using Docker Desktop driver with root privileges
	I0115 05:01:56.706803   65699 cni.go:84] Creating CNI manager for ""
	I0115 05:01:56.706845   65699 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0115 05:01:56.706864   65699 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 05:01:56.706886   65699 start_flags.go:321] config:
	{Name:download-only-005000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-005000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:01:56.728757   65699 out.go:97] Starting control plane node download-only-005000 in cluster download-only-005000
	I0115 05:01:56.728839   65699 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 05:01:56.750792   65699 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 05:01:56.750850   65699 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:01:56.750917   65699 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 05:01:56.801369   65699 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0115 05:01:56.801401   65699 cache.go:56] Caching tarball of preloaded images
	I0115 05:01:56.801603   65699 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:01:56.801834   65699 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 05:01:56.801943   65699 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 05:01:56.801959   65699 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 05:01:56.801965   65699 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 05:01:56.801975   65699 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 05:01:56.823003   65699 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0115 05:01:56.823031   65699 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:01:56.904501   65699 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0115 05:02:00.745227   65699 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:02:00.745431   65699 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:02:01.374468   65699 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0115 05:02:01.374709   65699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/download-only-005000/config.json ...
	I0115 05:02:01.374734   65699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/download-only-005000/config.json: {Name:mke66f8f777cd6b0586ecc7cde0900dda2d31fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:02:01.375054   65699 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0115 05:02:01.376395   65699 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-005000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-005000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (9.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-444000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-444000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (9.657014567s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (9.66s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-444000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-444000: exit status 85 (297.548143ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-652000 | jenkins | v1.32.0 | 15 Jan 24 05:01 PST |                     |
	|         | -p download-only-652000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 05:01 PST | 15 Jan 24 05:01 PST |
	| delete  | -p download-only-652000           | download-only-652000 | jenkins | v1.32.0 | 15 Jan 24 05:01 PST | 15 Jan 24 05:01 PST |
	| start   | -o=json --download-only           | download-only-005000 | jenkins | v1.32.0 | 15 Jan 24 05:01 PST |                     |
	|         | -p download-only-005000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 05:02 PST | 15 Jan 24 05:02 PST |
	| delete  | -p download-only-005000           | download-only-005000 | jenkins | v1.32.0 | 15 Jan 24 05:02 PST | 15 Jan 24 05:02 PST |
	| start   | -o=json --download-only           | download-only-444000 | jenkins | v1.32.0 | 15 Jan 24 05:02 PST |                     |
	|         | -p download-only-444000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 05:02:07
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 05:02:07.877155   65764 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:02:07.877455   65764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:02:07.877460   65764 out.go:309] Setting ErrFile to fd 2...
	I0115 05:02:07.877470   65764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:02:07.877653   65764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:02:07.879063   65764 out.go:303] Setting JSON to true
	I0115 05:02:07.901397   65764 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":30470,"bootTime":1705293257,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:02:07.901486   65764 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:02:07.924654   65764 out.go:97] [download-only-444000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:02:07.946122   65764 out.go:169] MINIKUBE_LOCATION=17953
	I0115 05:02:07.924901   65764 notify.go:220] Checking for updates...
	I0115 05:02:07.990290   65764 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:02:08.011034   65764 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:02:08.032422   65764 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:02:08.054384   65764 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	W0115 05:02:08.097299   65764 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 05:02:08.097775   65764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:02:08.158664   65764 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:02:08.158814   65764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:02:08.259128   65764 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:false NGoroutines:60 SystemTime:2024-01-15 13:02:08.247372552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:02:08.280632   65764 out.go:97] Using the docker driver based on user configuration
	I0115 05:02:08.280675   65764 start.go:298] selected driver: docker
	I0115 05:02:08.280691   65764 start.go:902] validating driver "docker" against <nil>
	I0115 05:02:08.280901   65764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:02:08.381976   65764 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:false NGoroutines:60 SystemTime:2024-01-15 13:02:08.372329316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:02:08.382154   65764 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 05:02:08.385128   65764 start_flags.go:392] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I0115 05:02:08.385290   65764 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 05:02:08.406143   65764 out.go:169] Using Docker Desktop driver with root privileges
	I0115 05:02:08.427325   65764 cni.go:84] Creating CNI manager for ""
	I0115 05:02:08.427370   65764 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0115 05:02:08.427392   65764 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 05:02:08.427409   65764 start_flags.go:321] config:
	{Name:download-only-444000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:02:08.449341   65764 out.go:97] Starting control plane node download-only-444000 in cluster download-only-444000
	I0115 05:02:08.449383   65764 cache.go:121] Beginning downloading kic base image for docker with docker
	I0115 05:02:08.470979   65764 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 05:02:08.471047   65764 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0115 05:02:08.471141   65764 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 05:02:08.522499   65764 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 05:02:08.522704   65764 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 05:02:08.522734   65764 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 05:02:08.522740   65764 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 05:02:08.522749   65764 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 05:02:08.525491   65764 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0115 05:02:08.525503   65764 cache.go:56] Caching tarball of preloaded images
	I0115 05:02:08.525678   65764 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0115 05:02:08.547172   65764 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0115 05:02:08.547201   65764 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:02:08.626842   65764 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0115 05:02:11.147755   65764 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:02:11.147936   65764 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0115 05:02:11.693596   65764 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0115 05:02:11.693855   65764 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/download-only-444000/config.json ...
	I0115 05:02:11.693880   65764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/download-only-444000/config.json: {Name:mkbdfb22f5e75658eda3cd6371bdc9aaded025ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 05:02:11.695154   65764 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0115 05:02:11.695434   65764 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17953-64881/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-444000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.65s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-444000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnlyKic (2.01s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-894000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-894000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-894000
--- PASS: TestDownloadOnlyKic (2.01s)

                                                
                                    
TestBinaryMirror (1.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-183000 --alsologtostderr --binary-mirror http://127.0.0.1:53517 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-183000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-183000
--- PASS: TestBinaryMirror (1.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-744000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-744000: exit status 85 (172.417194ms)

                                                
                                                
-- stdout --
	* Profile "addons-744000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-744000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-744000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-744000: exit status 85 (192.511059ms)

                                                
                                                
-- stdout --
	* Profile "addons-744000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-744000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
TestAddons/Setup (150.42s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-744000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-744000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.421612763s)
--- PASS: TestAddons/Setup (150.42s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-df99k" [f8b4a408-d589-4deb-8c8c-a49ce080e37d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004268663s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-744000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-744000: (6.303025649s)
--- PASS: TestAddons/parallel/InspektorGadget (12.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.009993ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-dn845" [5be85bea-0360-45e0-a35a-1ec1b5d6d4cc] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005363405s
addons_test.go:415: (dbg) Run:  kubectl --context addons-744000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-744000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.498219ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-jsnpp" [a97f720d-cb54-470c-a0ed-bce08a1a529c] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004716418s
addons_test.go:473: (dbg) Run:  kubectl --context addons-744000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-744000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.165919267s)
addons_test.go:478: kubectl --context addons-744000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-744000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.91s)

                                                
                                    
TestAddons/parallel/CSI (76.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 16.587365ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-744000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-744000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [58b113c9-e939-44bb-bce0-d3df1a426350] Pending
helpers_test.go:344: "task-pv-pod" [58b113c9-e939-44bb-bce0-d3df1a426350] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [58b113c9-e939-44bb-bce0-d3df1a426350] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.00495801s
addons_test.go:584: (dbg) Run:  kubectl --context addons-744000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-744000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-744000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-744000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-744000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-744000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-744000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [91b7a393-ea1c-49ec-855c-7f7dca60d1bc] Pending
helpers_test.go:344: "task-pv-pod-restore" [91b7a393-ea1c-49ec-855c-7f7dca60d1bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [91b7a393-ea1c-49ec-855c-7f7dca60d1bc] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003872328s
addons_test.go:626: (dbg) Run:  kubectl --context addons-744000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-744000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-744000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-744000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-744000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.771413952s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-744000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (76.86s)

                                                
                                    
TestAddons/parallel/Headlamp (12.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-744000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-744000 --alsologtostderr -v=1: (1.49734071s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-gv4lp" [3c41a033-9771-481c-9fd3-94277602d3be] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-gv4lp" [3c41a033-9771-481c-9fd3-94277602d3be] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.007206416s
--- PASS: TestAddons/parallel/Headlamp (12.51s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-fcz72" [7f47cdb7-1d66-4fd6-b8da-bd25f3ed6a30] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004759401s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-744000
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

                                                
                                    
TestAddons/parallel/LocalPath (55.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-744000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-744000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9b2edd4c-c3ad-49e2-85f8-bc9724484634] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9b2edd4c-c3ad-49e2-85f8-bc9724484634] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9b2edd4c-c3ad-49e2-85f8-bc9724484634] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004108259s
addons_test.go:891: (dbg) Run:  kubectl --context addons-744000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-744000 ssh "cat /opt/local-path-provisioner/pvc-eb1057ee-a166-4783-84f5-0703b675a199_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-744000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-744000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-744000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-744000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.182947194s)
--- PASS: TestAddons/parallel/LocalPath (55.16s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2c7sc" [52847713-f543-436f-9054-74052a342646] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00617555s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-744000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6z4ll" [582683fd-f226-4d97-8487-5ecb63334da3] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005965084s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-744000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-744000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.73s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-744000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-744000: (11.002237847s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-744000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-744000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-744000
--- PASS: TestAddons/StoppedEnableDisable (11.73s)

                                                
                                    
TestErrorSpam/setup (22.23s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-003000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-003000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 --driver=docker : (22.228862542s)
--- PASS: TestErrorSpam/setup (22.23s)

                                                
                                    
TestErrorSpam/start (2.05s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 start --dry-run
--- PASS: TestErrorSpam/start (2.05s)

                                                
                                    
TestErrorSpam/status (1.21s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 status
--- PASS: TestErrorSpam/status (1.21s)

                                                
                                    
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (11.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 stop: (10.826056404s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-003000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-003000 stop
--- PASS: TestErrorSpam/stop (11.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17953-64881/.minikube/files/etc/test/nested/copy/65630/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (74.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-281000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m14.645600172s)
--- PASS: TestFunctional/serial/StartWithProxy (74.65s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.07s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-281000 --alsologtostderr -v=8: (38.065252787s)
functional_test.go:659: soft start took 38.065702886s for "functional-281000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.07s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-281000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.1: (1.238554793s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.3
E0115 05:09:54.159825   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:09:54.166532   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:09:54.176739   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.3: (1.16246162s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:latest
E0115 05:09:54.196965   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:09:54.237311   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:09:54.317947   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:09:54.478082   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:09:54.798256   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:latest: (1.076423029s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2703414877/001
E0115 05:09:55.464380   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add minikube-local-cache-test:functional-281000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add minikube-local-cache-test:functional-281000: (1.063716086s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache delete minikube-local-cache-test:functional-281000
E0115 05:09:56.745416   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-281000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (388.939861ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0115 05:09:59.305597   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
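For reference, the reload sequence this test records can be replayed by hand. The sketch below is a minimal, hypothetical Go driver (not part of the harness), assuming the freshly built binary at out/minikube-darwin-amd64 and the functional-281000 profile shown in the log; every command is taken verbatim from the lines above.

package main

import (
	"fmt"
	"os/exec"
)

// minikube invokes the locally built binary against the functional-281000
// profile and returns combined output plus any non-zero-exit error.
func minikube(args ...string) (string, error) {
	full := append([]string{"-p", "functional-281000"}, args...)
	out, err := exec.Command("out/minikube-darwin-amd64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Drop the cached image inside the node, confirm it is gone, then let
	// `cache reload` push it back from the host-side cache.
	minikube("ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if _, err := minikube("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	minikube("cache", "reload")
	if _, err := minikube("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}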

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 kubectl -- --context functional-281000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.54s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-281000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.77s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0115 05:10:04.426431   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:10:14.666557   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
E0115 05:10:35.147650   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-281000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.770963628s)
functional_test.go:757: restart took 36.77110631s for "functional-281000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.77s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-281000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 logs: (2.99744824s)
--- PASS: TestFunctional/serial/LogsCmd (3.00s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3926302698/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3926302698/001/logs.txt: (3.074876988s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.08s)

                                                
                                    
TestFunctional/serial/InvalidService (4.85s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-281000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-281000: exit status 115 (562.443554ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30109 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-281000 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-281000 delete -f testdata/invalidsvc.yaml: (1.060266878s)
--- PASS: TestFunctional/serial/InvalidService (4.85s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 config get cpus: exit status 14 (62.326961ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 config get cpus: exit status 14 (63.753737ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
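The unset/get/set round trip above also documents the exit-code contract: `config get` on a key that is not set fails with status 14. A minimal, hypothetical sketch of that contract, again assuming the out/minikube-darwin-amd64 binary and the functional-281000 profile from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// config runs `minikube -p functional-281000 config ...` and returns the
// process exit code (0 on success, 1 if the binary could not be run at all).
func config(args ...string) int {
	full := append([]string{"-p", "functional-281000", "config"}, args...)
	err := exec.Command("out/minikube-darwin-amd64", full...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return 1 // binary missing or not executable
	}
	return 0
}

func main() {
	config("unset", "cpus")
	if code := config("get", "cpus"); code != 14 {
		fmt.Println("expected exit status 14 for an unset key, got", code)
	}
	config("set", "cpus", "2")
	if code := config("get", "cpus"); code != 0 {
		fmt.Println("expected the key to resolve after set, got exit", code)
	}
	config("unset", "cpus")
}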

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-281000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-281000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 67945: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.90s)

                                                
                                    
TestFunctional/parallel/DryRun (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (985.634841ms)

                                                
                                                
-- stdout --
	* [functional-281000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:12:17.064863   67810 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:12:17.065141   67810 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:12:17.065147   67810 out.go:309] Setting ErrFile to fd 2...
	I0115 05:12:17.065151   67810 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:12:17.065439   67810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:12:17.067525   67810 out.go:303] Setting JSON to false
	I0115 05:12:17.095455   67810 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":31080,"bootTime":1705293257,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:12:17.095586   67810 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:12:17.117731   67810 out.go:177] * [functional-281000] minikube v1.32.0 on Darwin 14.2.1
	I0115 05:12:17.181362   67810 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 05:12:17.160215   67810 notify.go:220] Checking for updates...
	I0115 05:12:17.223133   67810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:12:17.265389   67810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:12:17.307337   67810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:12:17.349204   67810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 05:12:17.391318   67810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 05:12:17.412874   67810 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:12:17.413631   67810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:12:17.491366   67810 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:12:17.491557   67810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:12:17.686293   67810 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-15 13:12:17.64446439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/do
cker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:12:17.770274   67810 out.go:177] * Using the docker driver based on existing profile
	I0115 05:12:17.812490   67810 start.go:298] selected driver: docker
	I0115 05:12:17.812505   67810 start.go:902] validating driver "docker" against &{Name:functional-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-281000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:12:17.812592   67810 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 05:12:17.875489   67810 out.go:177] 
	W0115 05:12:17.917602   67810 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 05:12:17.938673   67810 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.84s)
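Both dry-run failures above exit with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because the requested 250MB is below minikube's 1800MB floor; a dry run only validates the requested config against the existing profile without touching the container. A small, hypothetical check of that behaviour, using the same flags as the logged command:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the test: too little memory, validation only (--dry-run).
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "functional-281000", "--dry-run",
		"--memory", "250MB", "--alsologtostderr", "--driver=docker")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 23:
		fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY exit code 23")
	default:
		fmt.Printf("unexpected result: %v\n%s", err, out)
	}
}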

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (737.919714ms)

                                                
                                                
-- stdout --
	* [functional-281000] minikube v1.32.0 sur Darwin 14.2.1
	  - MINIKUBE_LOCATION=17953
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 05:12:18.882922   67896 out.go:296] Setting OutFile to fd 1 ...
	I0115 05:12:18.883121   67896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:12:18.883127   67896 out.go:309] Setting ErrFile to fd 2...
	I0115 05:12:18.883131   67896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 05:12:18.883323   67896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
	I0115 05:12:18.884890   67896 out.go:303] Setting JSON to false
	I0115 05:12:18.907458   67896 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":31081,"bootTime":1705293257,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0115 05:12:18.907574   67896 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0115 05:12:18.929271   67896 out.go:177] * [functional-281000] minikube v1.32.0 sur Darwin 14.2.1
	I0115 05:12:18.972259   67896 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 05:12:18.972287   67896 notify.go:220] Checking for updates...
	I0115 05:12:18.994112   67896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig
	I0115 05:12:19.036110   67896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0115 05:12:19.112096   67896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 05:12:19.172051   67896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube
	I0115 05:12:19.216036   67896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 05:12:19.240697   67896 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0115 05:12:19.241592   67896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 05:12:19.304080   67896 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0115 05:12:19.304254   67896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 05:12:19.416433   67896 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-15 13:12:19.404639879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0115 05:12:19.438384   67896 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0115 05:12:19.459307   67896 start.go:298] selected driver: docker
	I0115 05:12:19.459349   67896 start.go:902] validating driver "docker" against &{Name:functional-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-281000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 05:12:19.459508   67896 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 05:12:19.484957   67896 out.go:177] 
	W0115 05:12:19.506239   67896 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 05:12:19.527252   67896 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.74s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8dbbaa69-2630-4b9b-9117-63ff778a32ae] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004911301s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-281000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-281000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9165f124-25ff-499e-ab08-1c04414789e1] Pending
helpers_test.go:344: "sp-pod" [9165f124-25ff-499e-ab08-1c04414789e1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9165f124-25ff-499e-ab08-1c04414789e1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004707572s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-281000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-281000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-281000 delete -f testdata/storage-provisioner/pod.yaml: (1.12739355s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e4395629-05c9-4857-9d4e-a3a15e1d0daa] Pending
helpers_test.go:344: "sp-pod" [e4395629-05c9-4857-9d4e-a3a15e1d0daa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e4395629-05c9-4857-9d4e-a3a15e1d0daa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00687611s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-281000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.86s)
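The claim-survives-pod-recreation flow above boils down to six kubectl calls. A condensed, hypothetical replay is sketched below (the real test also waits for sp-pod to reach Running between steps, which is omitted here); the manifests are the test's own testdata/storage-provisioner files referenced in the log.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs against the functional-281000 context, mirroring the test's calls.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-281000"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl error:", err)
	}
}

func main() {
	// Create the claim and a pod that mounts it, write a marker file,
	// recreate the pod, and confirm the file survived on the volume.
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}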

                                                
                                    
TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh -n functional-281000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cp functional-281000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3316246698/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh -n functional-281000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh -n functional-281000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.84s)

                                                
                                    
TestFunctional/parallel/MySQL (32.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-281000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-97wvw" [7f3c76a3-58ab-43b6-93cd-b410e93a0bf3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-97wvw" [7f3c76a3-58ab-43b6-93cd-b410e93a0bf3] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.003365626s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-281000 exec mysql-859648c796-97wvw -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-281000 exec mysql-859648c796-97wvw -- mysql -ppassword -e "show databases;": exit status 1 (155.830517ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-281000 exec mysql-859648c796-97wvw -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-281000 exec mysql-859648c796-97wvw -- mysql -ppassword -e "show databases;": exit status 1 (115.723501ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-281000 exec mysql-859648c796-97wvw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.14s)

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/65630/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/test/nested/copy/65630/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

                                                
                                    
TestFunctional/parallel/CertSync (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/65630.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/65630.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/65630.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /usr/share/ca-certificates/65630.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/656302.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/656302.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/656302.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /usr/share/ca-certificates/656302.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.60s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-281000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "sudo systemctl is-active crio": exit status 1 (460.644572ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                    
TestFunctional/parallel/License (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

                                                
                                    
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
TestFunctional/parallel/Version/components (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 version -o=json --components: (1.302846962s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-281000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-281000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-281000 image ls --format short --alsologtostderr:
I0115 05:12:31.707948   68069 out.go:296] Setting OutFile to fd 1 ...
I0115 05:12:31.708283   68069 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:31.708290   68069 out.go:309] Setting ErrFile to fd 2...
I0115 05:12:31.708294   68069 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:31.708494   68069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
I0115 05:12:31.709148   68069 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:31.709242   68069 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:31.709680   68069 cli_runner.go:164] Run: docker container inspect functional-281000 --format={{.State.Status}}
I0115 05:12:31.764353   68069 ssh_runner.go:195] Run: systemctl --version
I0115 05:12:31.764437   68069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-281000
I0115 05:12:31.818546   68069 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54211 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/functional-281000/id_rsa Username:docker}
I0115 05:12:31.910777   68069 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/google-containers/addon-resizer      | functional-281000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-281000 | 75b227816c423 | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-281000 | bba410cc3e782 | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| docker.io/library/nginx                     | latest            | a8758716bb6aa | 187MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-281000 image ls --format table --alsologtostderr:
I0115 05:12:35.273273   68119 out.go:296] Setting OutFile to fd 1 ...
I0115 05:12:35.273495   68119 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:35.273502   68119 out.go:309] Setting ErrFile to fd 2...
I0115 05:12:35.273506   68119 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:35.273693   68119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
I0115 05:12:35.274308   68119 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:35.274395   68119 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:35.274778   68119 cli_runner.go:164] Run: docker container inspect functional-281000 --format={{.State.Status}}
I0115 05:12:35.327519   68119 ssh_runner.go:195] Run: systemctl --version
I0115 05:12:35.327595   68119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-281000
I0115 05:12:35.380693   68119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54211 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/functional-281000/id_rsa Username:docker}
I0115 05:12:35.472075   68119 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-281000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109
c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"bba410cc3e782a35bead6b15f9b9414ef6da0a332a91ae920810925fb2ed6437","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-281000"],"size":"1240000"},{"id":"75b227816c4238a9a64f10eae233ae24e3f41f6c241c8d38a31c1f933cfe041d","repoD
igests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-281000"],"size":"30"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"si
ze":"501000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-281000 image ls --format json --alsologtostderr:
I0115 05:12:34.970520   68113 out.go:296] Setting OutFile to fd 1 ...
I0115 05:12:34.970752   68113 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:34.970758   68113 out.go:309] Setting ErrFile to fd 2...
I0115 05:12:34.970762   68113 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:34.970956   68113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
I0115 05:12:34.971602   68113 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:34.971762   68113 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:34.972255   68113 cli_runner.go:164] Run: docker container inspect functional-281000 --format={{.State.Status}}
I0115 05:12:35.024832   68113 ssh_runner.go:195] Run: systemctl --version
I0115 05:12:35.024908   68113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-281000
I0115 05:12:35.079230   68113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54211 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/functional-281000/id_rsa Username:docker}
I0115 05:12:35.169974   68113 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format yaml --alsologtostderr:
- id: 75b227816c4238a9a64f10eae233ae24e3f41f6c241c8d38a31c1f933cfe041d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-281000
size: "30"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-281000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-281000 image ls --format yaml --alsologtostderr:
I0115 05:12:32.017458   68075 out.go:296] Setting OutFile to fd 1 ...
I0115 05:12:32.017810   68075 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:32.017817   68075 out.go:309] Setting ErrFile to fd 2...
I0115 05:12:32.017821   68075 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:32.018012   68075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
I0115 05:12:32.018611   68075 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:32.018703   68075 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:32.019084   68075 cli_runner.go:164] Run: docker container inspect functional-281000 --format={{.State.Status}}
I0115 05:12:32.074458   68075 ssh_runner.go:195] Run: systemctl --version
I0115 05:12:32.074531   68075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-281000
I0115 05:12:32.127118   68075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54211 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/functional-281000/id_rsa Username:docker}
I0115 05:12:32.218578   68075 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh pgrep buildkitd: exit status 1 (370.908205ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image build -t localhost/my-image:functional-281000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image build -t localhost/my-image:functional-281000 testdata/build --alsologtostderr: (1.973169125s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-281000 image build -t localhost/my-image:functional-281000 testdata/build --alsologtostderr:
I0115 05:12:32.694445   68091 out.go:296] Setting OutFile to fd 1 ...
I0115 05:12:32.695405   68091 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:32.695411   68091 out.go:309] Setting ErrFile to fd 2...
I0115 05:12:32.695416   68091 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 05:12:32.695609   68091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17953-64881/.minikube/bin
I0115 05:12:32.696225   68091 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:32.696821   68091 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0115 05:12:32.697322   68091 cli_runner.go:164] Run: docker container inspect functional-281000 --format={{.State.Status}}
I0115 05:12:32.751247   68091 ssh_runner.go:195] Run: systemctl --version
I0115 05:12:32.751332   68091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-281000
I0115 05:12:32.804731   68091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54211 SSHKeyPath:/Users/jenkins/minikube-integration/17953-64881/.minikube/machines/functional-281000/id_rsa Username:docker}
I0115 05:12:32.895414   68091 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1356843786.tar
I0115 05:12:32.895499   68091 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 05:12:32.903735   68091 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1356843786.tar
I0115 05:12:32.907864   68091 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1356843786.tar: stat -c "%s %y" /var/lib/minikube/build/build.1356843786.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1356843786.tar': No such file or directory
I0115 05:12:32.907895   68091 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1356843786.tar --> /var/lib/minikube/build/build.1356843786.tar (3072 bytes)
I0115 05:12:32.929105   68091 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1356843786
I0115 05:12:32.938565   68091 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1356843786 -xf /var/lib/minikube/build/build.1356843786.tar
I0115 05:12:32.947880   68091 docker.go:360] Building image: /var/lib/minikube/build/build.1356843786
I0115 05:12:32.947960   68091 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-281000 /var/lib/minikube/build/build.1356843786
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:bba410cc3e782a35bead6b15f9b9414ef6da0a332a91ae920810925fb2ed6437 done
#8 naming to localhost/my-image:functional-281000 done
#8 DONE 0.0s
I0115 05:12:34.563946   68091 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-281000 /var/lib/minikube/build/build.1356843786: (1.61605134s)
I0115 05:12:34.564024   68091 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1356843786
I0115 05:12:34.573388   68091 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1356843786.tar
I0115 05:12:34.581707   68091 build_images.go:207] Built localhost/my-image:functional-281000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1356843786.tar
I0115 05:12:34.581732   68091 build_images.go:123] succeeded building to: functional-281000
I0115 05:12:34.581737   68091 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.369279676s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-281000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.46s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-281000 docker-env) && out/minikube-darwin-amd64 status -p functional-281000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-281000 docker-env) && out/minikube-darwin-amd64 status -p functional-281000": (1.30008297s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-281000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr: (3.791102629s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr: (2.171389878s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.129820284s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr: (4.015616642s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image save gcr.io/google-containers/addon-resizer:functional-281000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image save gcr.io/google-containers/addon-resizer:functional-281000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.828349003s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image rm gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.419780407s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image save --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image save --daemon gcr.io/google-containers/addon-resizer:functional-281000 --alsologtostderr: (1.553223876s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-281000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (15.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-281000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-281000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-fxnw9" [4e4bb4a0-0a66-4373-88fc-5592668c37b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0115 05:11:16.107419   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-d7447cc7f-fxnw9" [4e4bb4a0-0a66-4373-88fc-5592668c37b5] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.005305002s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service list -o json
functional_test.go:1493: Took "490.543995ms" to run "out/minikube-darwin-amd64 -p functional-281000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 service --namespace=default --https --url hello-node: signal: killed (15.004931288s)

                                                
                                                
-- stdout --
	https://127.0.0.1:54458

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:54458
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 67655: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9dc00795-d225-4c26-af8c-c098acefebee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9dc00795-d225-4c26-af8c-c098acefebee] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005158199s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-281000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 67690: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 service hello-node --url --format={{.IP}}: signal: killed (15.004630782s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 service hello-node --url: signal: killed (15.004269652s)

                                                
                                                
-- stdout --
	http://127.0.0.1:54527

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:54527
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "404.961856ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "81.58727ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "398.896789ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "80.753816ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3389037560/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3389037560/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3389037560/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T" /mount1: exit status 1 (516.548475ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-281000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3389037560/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3389037560/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3389037560/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)
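
The first "findmnt -T /mount1" above exits 1 and the immediate retry succeeds, which is the usual race between launching the mount daemons and the mounts becoming visible inside the node. A small Go sketch of that poll-until-visible loop follows, reusing the profile name and binary path from the log; the timeout and polling interval are illustrative choices, not values taken from the test.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh -- findmnt -T <path>` until it succeeds
// or the deadline passes.
func waitForMount(profile, path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh", "findmnt -T "+path)
		if cmd.Run() == nil {
			return nil // mount is visible inside the node
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("mount %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForMount("functional-281000", "/mount1", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}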

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-281000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-281000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-281000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (22.01s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-371000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-371000 --driver=docker : (22.007911472s)
--- PASS: TestImageBuild/serial/Setup (22.01s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-371000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-371000: (1.769089945s)
--- PASS: TestImageBuild/serial/NormalBuild (1.77s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.29s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-371000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-371000: (1.292792279s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.29s)
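
The command above threads a Docker build argument through minikube by repeating the --build-opt flag. A short Go sketch of issuing the same invocation programmatically follows; every flag value is copied from the log line, and the sketch is illustrative rather than the test's own helper code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the logged command:
	//   out/minikube-darwin-amd64 image build -t aaa:latest
	//     --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache
	//     ./testdata/image-build/test-arg -p image-371000
	cmd := exec.Command("out/minikube-darwin-amd64", "image", "build",
		"-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache",
		"./testdata/image-build/test-arg",
		"-p", "image-371000")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}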

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-371000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-371000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.87s)

                                                
                                    
TestJSONOutput/start/Command (37.32s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-376000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0115 05:20:56.737254   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0115 05:21:24.430668   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-376000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (37.319660234s)
--- PASS: TestJSONOutput/start/Command (37.32s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
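
These two subtests assert ordering properties of the JSON step events emitted during "start --output=json". A hedged Go sketch of an "increasing current steps" style of check follows, assuming events of type io.k8s.sigs.minikube.step carry a string currentstep field in their data payload (that shape is visible in the TestErrorJSONOutput output later in this report); the exact assertions the subtests make are not shown in this log.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

func main() {
	// Feed the line-delimited event stream from `minikube start --output=json` on stdin.
	sc := bufio.NewScanner(os.Stdin)
	last := -1
	increasing := true
	for sc.Scan() {
		var ev struct {
			Type string `json:"type"`
			Data struct {
				CurrentStep string `json:"currentstep"`
			} `json:"data"`
		}
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // ignore non-step lines
		}
		n, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil || n <= last {
			increasing = false
			break
		}
		last = n
	}
	fmt.Println("current steps strictly increasing:", increasing)
}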

                                                
                                    
TestJSONOutput/pause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-376000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-376000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-376000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-376000 --output=json --user=testUser: (10.953685507s)
--- PASS: TestJSONOutput/stop/Command (10.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.78s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-302000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-302000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (390.339287ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a056955e-822b-4404-b993-074c9fb1f611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-302000] minikube v1.32.0 on Darwin 14.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c368f636-1242-42c2-8d9a-15651e66c285","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17953"}}
	{"specversion":"1.0","id":"407d2592-3f16-46f7-97cd-7b9d63b7bc56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17953-64881/kubeconfig"}}
	{"specversion":"1.0","id":"8b660956-43de-41df-9dcc-15494353d8d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"9dc9166a-cc36-4907-a182-3228ebcb9dd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8f44ec02-32d9-44d5-9ab8-05a45f41931f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17953-64881/.minikube"}}
	{"specversion":"1.0","id":"d1491e96-6a46-4acf-a36d-53ad462db62b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3c1c97d3-ec44-47db-943c-2f9aa3864695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-302000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-302000
--- PASS: TestErrorJSONOutput (0.78s)
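
The captured stdout above is newline-delimited CloudEvents JSON, one event per line, which is what --output=json produces. A minimal Go sketch for reading such a stream and surfacing the event type plus the message and error details follows; the struct models only fields that actually appear in the output above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the CloudEvents fields visible in the captured output.
type event struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Pipe the JSON output of a minikube command into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data.Message)
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("  error %s (exit code %s)\n", ev.Data.Name, ev.Data.ExitCode)
		}
	}
}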

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.98s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-296000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-296000 --network=: (21.551573112s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-296000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-296000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-296000: (2.375975653s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.98s)
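
The test starts a cluster with --network= and then lists Docker networks using the Go-template format shown above. A small Go sketch of the follow-up check is below; the assumption that a network named after the profile should appear in the listing is an inference from the test's sequence, not something stated in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command the test runs.
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	const profile = "docker-network-296000" // profile name from the log above
	for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if name == profile {
			fmt.Println("found network created for profile:", name)
			return
		}
	}
	fmt.Println("no network named after the profile was found")
}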

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.38s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-834000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-834000 --network=bridge: (22.077330062s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-834000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-834000: (2.244997553s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.38s)

                                                
                                    
TestKicExistingNetwork (23.31s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-386000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-386000 --network=existing-network: (20.71626646s)
helpers_test.go:175: Cleaning up "existing-network-386000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-386000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-386000: (2.243431293s)
--- PASS: TestKicExistingNetwork (23.31s)

                                                
                                    
TestKicCustomSubnet (23.88s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-719000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-719000 --subnet=192.168.60.0/24: (21.436026959s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-719000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-719000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-719000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-719000: (2.392198854s)
--- PASS: TestKicCustomSubnet (23.88s)
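
The inspect command above extracts the first IPAM subnet of the cluster's Docker network. A short Go sketch that runs the same command and compares the result against the requested 192.168.60.0/24 follows; the comparison via net.ParseCIDR is illustrative, not the test's exact assertion.

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.60.0/24" // subnet requested in the log above
	// Same inspect command and Go-template format string as the test.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-719000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	_, wantNet, _ := net.ParseCIDR(want)
	_, gotNet, err := net.ParseCIDR(got)
	if err != nil || gotNet.String() != wantNet.String() {
		fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
		return
	}
	fmt.Println("custom subnet applied:", got)
}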

                                                
                                    
TestKicStaticIP (24.21s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-166000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-166000 --static-ip=192.168.200.200: (21.556552573s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-166000 ip
helpers_test.go:175: Cleaning up "static-ip-166000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-166000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-166000: (2.415448757s)
--- PASS: TestKicStaticIP (24.21s)
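
The follow-up "minikube -p static-ip-166000 ip" above presumably verifies that the node received the requested address. A minimal Go sketch of that comparison, reusing the binary path, profile name, and IP from the log; the exact assertion the test performs is not shown here.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.200.200" // static IP requested in the log above
	// Same command as the test: out/minikube-darwin-amd64 -p static-ip-166000 ip
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "static-ip-166000", "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("node IP %q does not match requested static IP %q\n", got, want)
		return
	}
	fmt.Println("static IP in effect:", got)
}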

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (51.69s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-663000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-663000 --driver=docker : (21.648697042s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-665000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-665000 --driver=docker : (23.349374136s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-663000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-665000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-665000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-665000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-665000: (2.531878327s)
helpers_test.go:175: Cleaning up "first-663000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-663000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-663000: (2.465421114s)
--- PASS: TestMinikubeProfile (51.69s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-434000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-434000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.672779095s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.67s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-434000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-447000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-447000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.2096198s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.21s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-447000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.07s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-434000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-434000 --alsologtostderr -v=5: (2.069012691s)
--- PASS: TestMountStart/serial/DeleteFirst (2.07s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-447000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.56s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-447000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-447000: (1.557882065s)
--- PASS: TestMountStart/serial/Stop (1.56s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.4s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-447000
E0115 05:24:53.985478   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-447000: (7.403776877s)
--- PASS: TestMountStart/serial/RestartStopped (8.40s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-447000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestPreload (134.18s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0115 05:55:56.713737   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/functional-281000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m9.735567081s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-854000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-854000 image pull gcr.io/k8s-minikube/busybox: (1.372681515s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-854000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-854000: (10.910920853s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-854000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-854000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (49.3753922s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-854000 image list
helpers_test.go:175: Cleaning up "test-preload-854000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-854000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-854000: (2.481357826s)
--- PASS: TestPreload (134.18s)
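
The sequence above pulls busybox, stops the cluster, restarts it without --preload=false, and finally runs "image list", presumably to confirm the pulled image survived the restart. A hedged Go sketch of that last check follows; the exact assertion the test makes is not shown in the log, so the substring match below is only illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether `minikube image list` for the given profile
// mentions the image, mirroring the final check in the log above.
func imagePresent(profile, image string) (bool, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "image", "list").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imagePresent("test-preload-854000", "gcr.io/k8s-minikube/busybox")
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	fmt.Println("busybox still listed after restart:", ok)
}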

                                                
                                    

Test skip (21/197)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 16.399189ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-59vjh" [76a313e9-9d1b-4dc6-a912-6ac6315e81fa] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003929856s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kvdtv" [a69067aa-8e4d-44c9-a581-c18f877224b2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005854013s
addons_test.go:340: (dbg) Run:  kubectl --context addons-744000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-744000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-744000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.705010015s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.78s)

                                                
                                    
TestAddons/parallel/Ingress (12.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-744000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-744000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-744000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2ed6e4b4-5a1b-4cff-bbc7-83c3b67d93c5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2ed6e4b4-5a1b-4cff-bbc7-83c3b67d93c5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004520268s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-744000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.39s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-281000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-281000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-6vdjl" [44379a8f-b0c5-4e6f-af42-b528c41f40c1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-6vdjl" [44379a8f-b0c5-4e6f-af42-b528c41f40c1] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004842341s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (12.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2694157381/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705324336987582000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2694157381/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705324336987582000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2694157381/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705324336987582000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2694157381/001/test-1705324336987582000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (607.350192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (528.665865ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (489.424856ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (394.649894ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.423408ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (405.729962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.064627ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "sudo umount -f /mount-9p": exit status 1 (386.612166ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-amd64 -p functional-281000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2694157381/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (13.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2163593362/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
2024/01/15 05:12:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (623.354184ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (443.538992ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (446.434202ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.960815ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.636562ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E0115 05:12:37.894258   65630 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17953-64881/.minikube/profiles/addons-744000/client.crt: no such file or directory
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.590936ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (358.523328ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "sudo umount -f /mount-9p": exit status 1 (367.053232ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-281000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2163593362/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.53s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    