Test Report: Docker_macOS 17761

4145ffc8c3ff629bd64b588eb0db70699e9f5232:2023-12-12:32257
Failed tests (26/189)

TestOffline (759.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-053000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-053000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m38.098741267s)

-- stdout --
	* [offline-docker-053000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-053000 in cluster offline-docker-053000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1212 15:30:33.295190    8553 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:30:33.295397    8553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:30:33.295404    8553 out.go:309] Setting ErrFile to fd 2...
	I1212 15:30:33.295409    8553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:30:33.295596    8553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 15:30:33.297146    8553 out.go:303] Setting JSON to false
	I1212 15:30:33.320911    8553 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5403,"bootTime":1702418430,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 15:30:33.321028    8553 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:30:33.342345    8553 out.go:177] * [offline-docker-053000] minikube v1.32.0 on Darwin 14.2
	I1212 15:30:33.384137    8553 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 15:30:33.384154    8553 notify.go:220] Checking for updates...
	I1212 15:30:33.441122    8553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 15:30:33.483971    8553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:30:33.505160    8553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:30:33.526078    8553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 15:30:33.547004    8553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:30:33.568232    8553 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:30:33.624590    8553 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 15:30:33.624779    8553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:30:33.755760    8553 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-12-12 23:30:33.746203654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:30:33.797981    8553 out.go:177] * Using the docker driver based on user configuration
	I1212 15:30:33.819192    8553 start.go:298] selected driver: docker
	I1212 15:30:33.819220    8553 start.go:902] validating driver "docker" against <nil>
	I1212 15:30:33.819240    8553 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:30:33.823475    8553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:30:33.923168    8553 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-12-12 23:30:33.913921844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:30:33.923359    8553 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 15:30:33.923543    8553 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 15:30:33.944949    8553 out.go:177] * Using Docker Desktop driver with root privileges
	I1212 15:30:33.966312    8553 cni.go:84] Creating CNI manager for ""
	I1212 15:30:33.966356    8553 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 15:30:33.966376    8553 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 15:30:33.966393    8553 start_flags.go:323] config:
	{Name:offline-docker-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:30:34.031235    8553 out.go:177] * Starting control plane node offline-docker-053000 in cluster offline-docker-053000
	I1212 15:30:34.074171    8553 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 15:30:34.115975    8553 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 15:30:34.158010    8553 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:30:34.158058    8553 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:30:34.158067    8553 cache.go:56] Caching tarball of preloaded images
	I1212 15:30:34.158084    8553 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 15:30:34.158185    8553 preload.go:174] Found /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:30:34.158196    8553 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:30:34.159035    8553 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/offline-docker-053000/config.json ...
	I1212 15:30:34.159124    8553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/offline-docker-053000/config.json: {Name:mk70bf8d815ba08dd6bd3262a85df04cd34bd91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:30:34.280382    8553 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 15:30:34.280403    8553 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 15:30:34.280433    8553 cache.go:194] Successfully downloaded all kic artifacts
	I1212 15:30:34.280518    8553 start.go:365] acquiring machines lock for offline-docker-053000: {Name:mk8638403286bf469c9eeddadcb0d892cf68e897 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:30:34.280756    8553 start.go:369] acquired machines lock for "offline-docker-053000" in 220.56µs
	I1212 15:30:34.280811    8553 start.go:93] Provisioning new machine with config: &{Name:offline-docker-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-053000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:30:34.281346    8553 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:30:34.323976    8553 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:30:34.324225    8553 start.go:159] libmachine.API.Create for "offline-docker-053000" (driver="docker")
	I1212 15:30:34.324250    8553 client.go:168] LocalClient.Create starting
	I1212 15:30:34.324354    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:30:34.324401    8553 main.go:141] libmachine: Decoding PEM data...
	I1212 15:30:34.324419    8553 main.go:141] libmachine: Parsing certificate...
	I1212 15:30:34.324493    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:30:34.324528    8553 main.go:141] libmachine: Decoding PEM data...
	I1212 15:30:34.324535    8553 main.go:141] libmachine: Parsing certificate...
	I1212 15:30:34.325067    8553 cli_runner.go:164] Run: docker network inspect offline-docker-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:30:34.424316    8553 cli_runner.go:211] docker network inspect offline-docker-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:30:34.424417    8553 network_create.go:281] running [docker network inspect offline-docker-053000] to gather additional debugging logs...
	I1212 15:30:34.424440    8553 cli_runner.go:164] Run: docker network inspect offline-docker-053000
	W1212 15:30:34.475673    8553 cli_runner.go:211] docker network inspect offline-docker-053000 returned with exit code 1
	I1212 15:30:34.475723    8553 network_create.go:284] error running [docker network inspect offline-docker-053000]: docker network inspect offline-docker-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-053000 not found
	I1212 15:30:34.475735    8553 network_create.go:286] output of [docker network inspect offline-docker-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-053000 not found
	
	** /stderr **
	I1212 15:30:34.475860    8553 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:30:34.528282    8553 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:30:34.528683    8553 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020e2cf0}
	I1212 15:30:34.528703    8553 network_create.go:124] attempt to create docker network offline-docker-053000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1212 15:30:34.528793    8553 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-053000 offline-docker-053000
	W1212 15:30:34.580221    8553 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-053000 offline-docker-053000 returned with exit code 1
	W1212 15:30:34.580261    8553 network_create.go:149] failed to create docker network offline-docker-053000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-053000 offline-docker-053000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1212 15:30:34.580281    8553 network_create.go:116] failed to create docker network offline-docker-053000 192.168.58.0/24, will retry: subnet is taken
	I1212 15:30:34.581890    8553 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:30:34.582287    8553 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002067bc0}
	I1212 15:30:34.582322    8553 network_create.go:124] attempt to create docker network offline-docker-053000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1212 15:30:34.582387    8553 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-053000 offline-docker-053000
	I1212 15:30:34.669721    8553 network_create.go:108] docker network offline-docker-053000 192.168.67.0/24 created
	I1212 15:30:34.669771    8553 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-053000" container
	I1212 15:30:34.669887    8553 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:30:34.723677    8553 cli_runner.go:164] Run: docker volume create offline-docker-053000 --label name.minikube.sigs.k8s.io=offline-docker-053000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:30:34.775824    8553 oci.go:103] Successfully created a docker volume offline-docker-053000
	I1212 15:30:34.775943    8553 cli_runner.go:164] Run: docker run --rm --name offline-docker-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-053000 --entrypoint /usr/bin/test -v offline-docker-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:30:35.345431    8553 oci.go:107] Successfully prepared a docker volume offline-docker-053000
	I1212 15:30:35.345492    8553 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:30:35.345506    8553 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:30:35.345611    8553 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:36:34.438806    8553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:36:34.439000    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:34.494801    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:36:34.494941    8553 retry.go:31] will retry after 366.711187ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:34.863430    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:34.918383    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:36:34.918502    8553 retry.go:31] will retry after 294.758537ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:35.213438    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:35.267317    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:36:35.267436    8553 retry.go:31] will retry after 422.739874ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:35.692534    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:35.746711    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	W1212 15:36:35.746827    8553 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	
	W1212 15:36:35.746862    8553 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:35.746937    8553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:36:35.747019    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:35.798567    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:36:35.798665    8553 retry.go:31] will retry after 180.853595ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:35.981817    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:36.034369    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:36:36.034459    8553 retry.go:31] will retry after 516.483883ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:36.551355    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:36.603183    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:36:36.603276    8553 retry.go:31] will retry after 730.871154ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:37.334496    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:36:37.388227    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	W1212 15:36:37.388332    8553 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	
	W1212 15:36:37.388352    8553 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:37.388371    8553 start.go:128] duration metric: createHost completed in 6m2.992852989s
	I1212 15:36:37.388377    8553 start.go:83] releasing machines lock for "offline-docker-053000", held for 6m2.993473588s
	W1212 15:36:37.388393    8553 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1212 15:36:37.388856    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:37.440206    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:37.440279    8553 delete.go:82] Unable to get host status for offline-docker-053000, assuming it has already been deleted: state: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	W1212 15:36:37.440360    8553 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1212 15:36:37.440374    8553 start.go:709] Will try again in 5 seconds ...
	I1212 15:36:42.441155    8553 start.go:365] acquiring machines lock for offline-docker-053000: {Name:mk8638403286bf469c9eeddadcb0d892cf68e897 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:36:42.441262    8553 start.go:369] acquired machines lock for "offline-docker-053000" in 79.501µs
	I1212 15:36:42.441285    8553 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:36:42.441293    8553 fix.go:54] fixHost starting: 
	I1212 15:36:42.441614    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:42.491790    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:42.491839    8553 fix.go:102] recreateIfNeeded on offline-docker-053000: state= err=unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:42.491857    8553 fix.go:107] machineExists: false. err=machine does not exist
	I1212 15:36:42.513262    8553 out.go:177] * docker "offline-docker-053000" container is missing, will recreate.
	I1212 15:36:42.557274    8553 delete.go:124] DEMOLISHING offline-docker-053000 ...
	I1212 15:36:42.557481    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:42.609579    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	W1212 15:36:42.609641    8553 stop.go:75] unable to get state: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:42.609661    8553 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:42.610037    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:42.660397    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:42.660451    8553 delete.go:82] Unable to get host status for offline-docker-053000, assuming it has already been deleted: state: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:42.660534    8553 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-053000
	W1212 15:36:42.712428    8553 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-053000 returned with exit code 1
	I1212 15:36:42.712465    8553 kic.go:371] could not find the container offline-docker-053000 to remove it. will try anyways
	I1212 15:36:42.712541    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:42.763764    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	W1212 15:36:42.763832    8553 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:42.763910    8553 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-053000 /bin/bash -c "sudo init 0"
	W1212 15:36:42.814690    8553 cli_runner.go:211] docker exec --privileged -t offline-docker-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 15:36:42.814727    8553 oci.go:650] error shutdown offline-docker-053000: docker exec --privileged -t offline-docker-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:43.815230    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:43.868368    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:43.868426    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:43.868439    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:36:43.868459    8553 retry.go:31] will retry after 549.094799ms: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:44.417939    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:44.470423    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:44.470487    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:44.470507    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:36:44.470536    8553 retry.go:31] will retry after 655.885628ms: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:45.126928    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:45.180305    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:45.180354    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:45.180370    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:36:45.180397    8553 retry.go:31] will retry after 1.056564396s: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:46.237388    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:46.291249    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:46.291294    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:46.291303    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:36:46.291331    8553 retry.go:31] will retry after 1.993631867s: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:48.285385    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:48.336987    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:48.337039    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:48.337050    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:36:48.337074    8553 retry.go:31] will retry after 2.824879915s: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:51.166565    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:51.220454    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:51.220511    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:51.220522    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:36:51.220547    8553 retry.go:31] will retry after 5.029706117s: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:56.251343    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:36:56.306940    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:36:56.306993    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:36:56.307007    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:36:56.307029    8553 retry.go:31] will retry after 7.236796715s: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:37:03.546356    8553 cli_runner.go:164] Run: docker container inspect offline-docker-053000 --format={{.State.Status}}
	W1212 15:37:03.600038    8553 cli_runner.go:211] docker container inspect offline-docker-053000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:03.600098    8553 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:37:03.600112    8553 oci.go:664] temporary error: container offline-docker-053000 status is  but expect it to be exited
	I1212 15:37:03.600144    8553 oci.go:88] couldn't shut down offline-docker-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	 
	I1212 15:37:03.600246    8553 cli_runner.go:164] Run: docker rm -f -v offline-docker-053000
	I1212 15:37:03.653314    8553 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-053000
	W1212 15:37:03.706769    8553 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-053000 returned with exit code 1
	I1212 15:37:03.706887    8553 cli_runner.go:164] Run: docker network inspect offline-docker-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:37:03.759816    8553 cli_runner.go:164] Run: docker network rm offline-docker-053000
	I1212 15:37:03.881425    8553 fix.go:114] Sleeping 1 second for extra luck!
	I1212 15:37:04.882537    8553 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:37:04.904087    8553 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:37:04.904242    8553 start.go:159] libmachine.API.Create for "offline-docker-053000" (driver="docker")
	I1212 15:37:04.904271    8553 client.go:168] LocalClient.Create starting
	I1212 15:37:04.904505    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:37:04.904818    8553 main.go:141] libmachine: Decoding PEM data...
	I1212 15:37:04.904846    8553 main.go:141] libmachine: Parsing certificate...
	I1212 15:37:04.904937    8553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:37:04.905139    8553 main.go:141] libmachine: Decoding PEM data...
	I1212 15:37:04.905154    8553 main.go:141] libmachine: Parsing certificate...
	I1212 15:37:04.926438    8553 cli_runner.go:164] Run: docker network inspect offline-docker-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:37:04.981001    8553 cli_runner.go:211] docker network inspect offline-docker-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:37:04.981106    8553 network_create.go:281] running [docker network inspect offline-docker-053000] to gather additional debugging logs...
	I1212 15:37:04.981126    8553 cli_runner.go:164] Run: docker network inspect offline-docker-053000
	W1212 15:37:05.092300    8553 cli_runner.go:211] docker network inspect offline-docker-053000 returned with exit code 1
	I1212 15:37:05.092333    8553 network_create.go:284] error running [docker network inspect offline-docker-053000]: docker network inspect offline-docker-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-053000 not found
	I1212 15:37:05.092375    8553 network_create.go:286] output of [docker network inspect offline-docker-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-053000 not found
	
	** /stderr **
	I1212 15:37:05.092557    8553 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:37:05.147821    8553 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:05.149420    8553 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:05.151007    8553 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:05.152681    8553 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:05.153163    8553 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022e3570}
	I1212 15:37:05.153177    8553 network_create.go:124] attempt to create docker network offline-docker-053000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1212 15:37:05.153260    8553 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-053000 offline-docker-053000
	I1212 15:37:05.248787    8553 network_create.go:108] docker network offline-docker-053000 192.168.85.0/24 created
	I1212 15:37:05.248973    8553 kic.go:121] calculated static IP "192.168.85.2" for the "offline-docker-053000" container
	I1212 15:37:05.249089    8553 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:37:05.305274    8553 cli_runner.go:164] Run: docker volume create offline-docker-053000 --label name.minikube.sigs.k8s.io=offline-docker-053000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:37:05.358631    8553 oci.go:103] Successfully created a docker volume offline-docker-053000
	I1212 15:37:05.358765    8553 cli_runner.go:164] Run: docker run --rm --name offline-docker-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-053000 --entrypoint /usr/bin/test -v offline-docker-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:37:05.697868    8553 oci.go:107] Successfully prepared a docker volume offline-docker-053000
	I1212 15:37:05.697904    8553 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:37:05.697918    8553 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:37:05.698024    8553 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:43:04.915943    8553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:43:04.917510    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:04.971048    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:04.971184    8553 retry.go:31] will retry after 215.495959ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:05.188903    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:05.240024    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:05.240146    8553 retry.go:31] will retry after 387.958638ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:05.628559    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:05.680811    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:05.680926    8553 retry.go:31] will retry after 331.504211ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:06.013388    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:06.065714    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:06.065816    8553 retry.go:31] will retry after 438.745665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:06.506497    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:06.557918    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	W1212 15:43:06.558029    8553 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	
	W1212 15:43:06.558053    8553 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:06.558117    8553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:43:06.558173    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:06.609966    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:06.610069    8553 retry.go:31] will retry after 149.531992ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:06.760290    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:06.817929    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:06.818034    8553 retry.go:31] will retry after 480.765968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:07.299275    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:07.352874    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:07.352972    8553 retry.go:31] will retry after 724.21971ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:08.077725    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:08.130369    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	W1212 15:43:08.130484    8553 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	
	W1212 15:43:08.130505    8553 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:08.130516    8553 start.go:128] duration metric: createHost completed in 6m3.237035826s
	I1212 15:43:08.130579    8553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:43:08.130635    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:08.181837    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:08.181940    8553 retry.go:31] will retry after 326.934914ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:08.509667    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:08.562720    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:08.562812    8553 retry.go:31] will retry after 393.248571ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:08.957101    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:09.008813    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:09.008911    8553 retry.go:31] will retry after 489.349635ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:09.498612    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:09.550768    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	W1212 15:43:09.550887    8553 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	
	W1212 15:43:09.550910    8553 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:09.550975    8553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:43:09.551033    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:09.601977    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:09.602069    8553 retry.go:31] will retry after 158.435903ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:09.760912    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:09.816410    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:09.816509    8553 retry.go:31] will retry after 547.398341ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:10.365695    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:10.418248    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	I1212 15:43:10.418339    8553 retry.go:31] will retry after 822.412011ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:11.241753    8553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000
	W1212 15:43:11.294619    8553 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000 returned with exit code 1
	W1212 15:43:11.294728    8553 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	
	W1212 15:43:11.294744    8553 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000
	I1212 15:43:11.294756    8553 fix.go:56] fixHost completed within 6m28.841798461s
	I1212 15:43:11.294762    8553 start.go:83] releasing machines lock for "offline-docker-053000", held for 6m28.841828088s
	W1212 15:43:11.294835    8553 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1212 15:43:11.315786    8553 out.go:177] 
	W1212 15:43:11.358755    8553 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1212 15:43:11.358804    8553 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1212 15:43:11.358838    8553 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1212 15:43:11.380673    8553 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-053000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2023-12-12 15:43:11.478223 -0800 PST m=+6037.594674817
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-053000
helpers_test.go:235: (dbg) docker inspect offline-docker-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "offline-docker-053000",
	        "Id": "ef283686818d1dceff7f0ecb0d4a6d4711751829967f1b8432fdccf93d1bfe00",
	        "Created": "2023-12-12T23:37:05.202614862Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-053000 -n offline-docker-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-053000 -n offline-docker-053000: exit status 7 (117.398018ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:43:11.650113    9106 status.go:249] status error: host: state: unknown state "offline-docker-053000": docker container inspect offline-docker-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-053000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-053000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-053000
--- FAIL: TestOffline (759.02s)

                                                
                                    
TestCertOptions (7200.746s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-415000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E1212 15:57:16.053458    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:57:25.842383    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:57:42.790204    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 16:02:16.059033    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (6m48s)
	TestCertOptions (6m14s)
	TestNetworkPlugins (32m1s)
	TestNetworkPlugins/group (32m1s)

                                                
                                                
goroutine 2195 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000103380, 0xc0008b1b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc0004f43c0?, {0x5274ee0, 0x2a, 0x2a}, {0x10b0145?, 0xc0001900c0?, 0x52966e0?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc0004f43c0)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001fe880)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1795 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002499860)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002499860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002499860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002499860, 0x3b3e2e0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 144 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000821900, 0xc0001842a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

                                                
                                                
goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

                                                
                                                
goroutine 1245 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00295be40, 0xc00293d140)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1244
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 632 [syscall, 6 minutes]:
syscall.syscall6(0x10106dd?, 0x59c95b8?, 0xc0009f3717?, 0xc0009f3918?, 0x100c0009f38e0?, 0x1010000000003?, 0x59d5070?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0009f3890?, 0x1010905?, 0x90?, 0x305b340?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc000954860?, 0xc0009f38c4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0025aa120)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0006b8580)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002508000?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002508000, 0xc0006b8580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertOptions(0xc002508000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x40e
testing.tRunner(0xc002508000, 0x3b3e1f8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 924 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3faf718, 0xc0001842a0}, 0xc00268ff50, 0xc002293a38?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3faf718, 0xc0001842a0}, 0x1?, 0x1?, 0xc00268ffb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3faf718?, 0xc0001842a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00268ffd0?, 0x117bdc7?, 0xc002641440?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 895
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 143 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0007ff980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 1354 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc002c5a000)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1371
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

                                                
                                                
goroutine 925 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 924
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 633 [syscall, 6 minutes]:
syscall.syscall6(0x1010585?, 0xc002087a98?, 0xc002087988?, 0xc002087ab8?, 0x100c002087a80?, 0x1000000000003?, 0x59d5070?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc002087a30?, 0x1010905?, 0x90?, 0x305b340?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc0009549d0?, 0xc002087a64, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0025aa060)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0006b82c0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0025081a0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0025081a0, 0xc0006b82c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertExpiration(0xc0025081a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2d7
testing.tRunner(0xc0025081a0, 0x3b3e1f0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2193 [IO wait]:
internal/poll.runtime_pollWait(0x4ca54408, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00282c180?, 0xc00072b863?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00282c180, {0xc00072b863, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002244040, {0xc00072b863?, 0xc000a7be68?, 0xc000a7be68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002154450, {0x3f8b720, 0xc002244040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8b7a0, 0xc002154450}, {0x3f8b720, 0xc002244040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002b2a000?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 632
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1893 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0023144e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0023144e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0023144e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0023144e0, 0xc000b16900)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1870 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002508b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002508b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002508b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002508b60, 0xc000b16580)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2177 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4ca54ad0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00282c120?, 0xc0009c4b07?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00282c120, {0xc0009c4b07, 0x4f9, 0x4f9})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002244010, {0xc0009c4b07?, 0xc000b7e0c0?, 0xc000a7b668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002154120, {0x3f8b720, 0xc002244010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8b7a0, 0xc002154120}, {0x3f8b720, 0xc002244010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00256e4e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 633
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 158 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008218d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f886c0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0007ff800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000821900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f8cc40, 0xc00247c060}, 0x1, 0xc0001842a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc000a7afd0?, 0x15e86a5?, 0xc0007ff980?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 159 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3faf718, 0xc0001842a0}, 0xc000110f50, 0x2a35385?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3faf718, 0xc0001842a0}, 0x58?, 0xc00080c630?, 0xc0004cc540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3faf718?, 0xc0001842a0?}, 0xc00219cd00?, 0x11375a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1138465?, 0xc00219cd00?, 0xc000a6e600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 160 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 159
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2194 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0006b8580, 0xc002b2a540)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 632
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1883 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0026789c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0026789c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0026789c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:305 +0xb4
testing.tRunner(0xc0026789c0, 0x3b3e2c0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1223 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0029e62c0, 0xc00287f2c0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1222
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1042 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0022ff4a0, 0xc002377200)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1041
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1889 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000192680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000192680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000192680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000192680, 0xc000b16700)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1891 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0001929c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0001929c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0001929c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0001929c0, 0xc000b16800)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1881 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002678680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002678680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc002678680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:156 +0x86
testing.tRunner(0xc002678680, 0x3b3e328)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1890 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000192820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000192820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000192820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000192820, 0xc000b16780)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1892 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002314340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002314340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002314340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002314340, 0xc000b16880)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1880 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0026784e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0026784e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0026784e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:98 +0x89
testing.tRunner(0xc0026784e0, 0x3b3e300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2178 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4ca547e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00282c1e0?, 0xc00072b463?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00282c1e0, {0xc00072b463, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002244028, {0xc00072b463?, 0xc000114668?, 0xc000114668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002154180, {0x3f8b720, 0xc002244028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8b7a0, 0xc002154180}, {0x3f8b720, 0xc002244028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00256e540?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 633
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 2160 [IO wait]:
internal/poll.runtime_pollWait(0x4ca545f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00282c060?, 0xc00207c2c3?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00282c060, {0xc00207c2c3, 0x53d, 0x53d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002244020, {0xc00207c2c3?, 0xc002578e68?, 0xc002578e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002154420, {0x3f8b720, 0xc002244020})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f8b7a0, 0xc002154420}, {0x3f8b720, 0xc002244020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002c4b320?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 632
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1794 [chan receive, 34 minutes]:
testing.(*T).Run(0xc002498b60, {0x30ed5f6?, 0x3f29eb176f0?}, 0xc00221a288)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002498b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002498b60, 0x3b3e2d8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1882 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002678820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002678820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc002678820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:228 +0x39
testing.tRunner(0xc002678820, 0x3b3e2a8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2179 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0006b82c0, 0xc002b2a4e0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 633
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 697 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x4ca54bc8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0025c6400?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0025c6400)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0025c6400)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0025b6800)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc0025b6800)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc0006a74a0, {0x3fa2d20, 0xc0025b6800})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc0006a74a0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc002615860?, 0xc002615860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 694
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

                                                
                                                
goroutine 1872 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000523520)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000523520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000523520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000523520, 0xc000b16680)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1353 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc002c5a000)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1371
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

                                                
                                                
goroutine 1311 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a491e0, 0xc002b2a060)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 821
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1869 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0024989c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0024989c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0024989c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0024989c0, 0xc000b16400)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1796 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002499a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002499a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc002499a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc002499a00, 0x3b3e2f0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1868 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0024981a0, 0xc00221a288)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1794
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1874 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000102ea0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000102ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc002293980?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc000102ea0, 0x3b3e320)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1871 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000aae5a0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000523040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000523040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000523040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000523040, 0xc000b16600)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1868
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 923 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc0023e0f50, 0x2b)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f886c0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002648120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0023e0f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00268b788?, {0x3f8cc40, 0xc002606b40}, 0x1, 0xc0001842a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00256fb60?, 0x3b9aca00, 0x0, 0xd0?, 0x104475c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117bd65?, 0xc0006b8dc0?, 0xc000a64b40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 895
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 895 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0023e0f80, 0xc0001842a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 834
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

                                                
                                                
goroutine 894 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002648240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 834
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                    
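The goroutine dump above is dominated by test goroutines parked for 32-34 minutes in testing.(*testContext).waitParallel. Per the stack frames, each of them got there the same way: the subtest called the integration helper MaybeParallel (helpers_test.go:483), which called testing.(*T).Parallel (testing.go:1404), and that call blocks until a parallel-execution slot (bounded by -test.parallel) frees up. The following is a minimal, self-contained sketch of that mechanism only; the subtest names are illustrative, and nothing beyond the stack frames above is assumed about the minikube helpers.

	package example

	import "testing"

	// TestParallelSlots mimics the shape visible in the dump: a parent test
	// spawns subtests that each call t.Parallel(). When the -test.parallel
	// limit is already reached, the call parks the subtest goroutine in
	// waitParallel (the "chan receive" state in the dump) until a running
	// parallel test finishes and a slot becomes free.
	func TestParallelSlots(t *testing.T) {
		for _, name := range []string{"auto", "kindnet", "calico"} { // illustrative names
			name := name
			t.Run(name, func(t *testing.T) {
				t.Parallel() // blocks here while all parallel slots are busy
				// real per-plugin checks would run here
			})
		}
	}

Run with a small limit, e.g. go test -run TestParallelSlots -parallel 1 -v, only one subtest proceeds at a time while the others wait, which is the state the dump captured for the TestNetworkPlugins, TestPause and TestStartStop goroutines.
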
x
+
TestDockerFlags (758.07s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-827000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E1212 15:47:16.032614    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:47:42.769350    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:51:59.100594    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:52:16.045204    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:52:42.781784    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-827000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m36.77449737s)

                                                
                                                
-- stdout --
	* [docker-flags-827000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-827000 in cluster docker-flags-827000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-827000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:43:41.515778    9249 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:43:41.516900    9249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:43:41.516906    9249 out.go:309] Setting ErrFile to fd 2...
	I1212 15:43:41.516910    9249 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:43:41.517101    9249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 15:43:41.518607    9249 out.go:303] Setting JSON to false
	I1212 15:43:41.543757    9249 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6191,"bootTime":1702418430,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 15:43:41.543845    9249 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:43:41.565904    9249 out.go:177] * [docker-flags-827000] minikube v1.32.0 on Darwin 14.2
	I1212 15:43:41.607937    9249 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 15:43:41.608069    9249 notify.go:220] Checking for updates...
	I1212 15:43:41.651734    9249 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 15:43:41.672917    9249 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:43:41.693982    9249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:43:41.715758    9249 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 15:43:41.739169    9249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:43:41.761666    9249 config.go:182] Loaded profile config "force-systemd-flag-531000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:43:41.761809    9249 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:43:41.819822    9249 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 15:43:41.819995    9249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:43:41.923354    9249 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:198 SystemTime:2023-12-12 23:43:41.911818594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:43:41.965618    9249 out.go:177] * Using the docker driver based on user configuration
	I1212 15:43:41.986781    9249 start.go:298] selected driver: docker
	I1212 15:43:41.986807    9249 start.go:902] validating driver "docker" against <nil>
	I1212 15:43:41.986826    9249 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:43:41.991344    9249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:43:42.095270    9249 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:198 SystemTime:2023-12-12 23:43:42.084892487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:43:42.095466    9249 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 15:43:42.095649    9249 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1212 15:43:42.116673    9249 out.go:177] * Using Docker Desktop driver with root privileges
	I1212 15:43:42.137860    9249 cni.go:84] Creating CNI manager for ""
	I1212 15:43:42.137878    9249 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 15:43:42.137886    9249 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 15:43:42.137893    9249 start_flags.go:323] config:
	{Name:docker-flags-827000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-827000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s GPUs:}
	I1212 15:43:42.179649    9249 out.go:177] * Starting control plane node docker-flags-827000 in cluster docker-flags-827000
	I1212 15:43:42.201078    9249 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 15:43:42.222840    9249 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 15:43:42.264955    9249 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:43:42.265032    9249 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:43:42.265051    9249 cache.go:56] Caching tarball of preloaded images
	I1212 15:43:42.265048    9249 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 15:43:42.265276    9249 preload.go:174] Found /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:43:42.265302    9249 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:43:42.265456    9249 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/docker-flags-827000/config.json ...
	I1212 15:43:42.266191    9249 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/docker-flags-827000/config.json: {Name:mk6507095cd90035eb6505a4b234b694a6da7141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:43:42.318047    9249 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 15:43:42.318071    9249 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 15:43:42.318094    9249 cache.go:194] Successfully downloaded all kic artifacts
	I1212 15:43:42.318140    9249 start.go:365] acquiring machines lock for docker-flags-827000: {Name:mk64a6766c1bc690c81b843c996cc615acac4dc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:43:42.318265    9249 start.go:369] acquired machines lock for "docker-flags-827000" in 112.14µs
	I1212 15:43:42.318290    9249 start.go:93] Provisioning new machine with config: &{Name:docker-flags-827000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-827000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:43:42.318366    9249 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:43:42.339797    9249 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:43:42.340149    9249 start.go:159] libmachine.API.Create for "docker-flags-827000" (driver="docker")
	I1212 15:43:42.340190    9249 client.go:168] LocalClient.Create starting
	I1212 15:43:42.340350    9249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:43:42.340435    9249 main.go:141] libmachine: Decoding PEM data...
	I1212 15:43:42.340467    9249 main.go:141] libmachine: Parsing certificate...
	I1212 15:43:42.340562    9249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:43:42.340630    9249 main.go:141] libmachine: Decoding PEM data...
	I1212 15:43:42.340645    9249 main.go:141] libmachine: Parsing certificate...
	I1212 15:43:42.341745    9249 cli_runner.go:164] Run: docker network inspect docker-flags-827000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:43:42.393116    9249 cli_runner.go:211] docker network inspect docker-flags-827000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:43:42.393226    9249 network_create.go:281] running [docker network inspect docker-flags-827000] to gather additional debugging logs...
	I1212 15:43:42.393246    9249 cli_runner.go:164] Run: docker network inspect docker-flags-827000
	W1212 15:43:42.446355    9249 cli_runner.go:211] docker network inspect docker-flags-827000 returned with exit code 1
	I1212 15:43:42.446387    9249 network_create.go:284] error running [docker network inspect docker-flags-827000]: docker network inspect docker-flags-827000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-827000 not found
	I1212 15:43:42.446406    9249 network_create.go:286] output of [docker network inspect docker-flags-827000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-827000 not found
	
	** /stderr **
	I1212 15:43:42.446535    9249 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:43:42.499710    9249 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:43:42.501192    9249 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:43:42.502672    9249 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:43:42.503006    9249 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00240d5d0}
	I1212 15:43:42.503020    9249 network_create.go:124] attempt to create docker network docker-flags-827000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1212 15:43:42.503092    9249 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-827000 docker-flags-827000
	I1212 15:43:42.590883    9249 network_create.go:108] docker network docker-flags-827000 192.168.76.0/24 created
	I1212 15:43:42.590922    9249 kic.go:121] calculated static IP "192.168.76.2" for the "docker-flags-827000" container
	I1212 15:43:42.591021    9249 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:43:42.644898    9249 cli_runner.go:164] Run: docker volume create docker-flags-827000 --label name.minikube.sigs.k8s.io=docker-flags-827000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:43:42.697255    9249 oci.go:103] Successfully created a docker volume docker-flags-827000
	I1212 15:43:42.697379    9249 cli_runner.go:164] Run: docker run --rm --name docker-flags-827000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-827000 --entrypoint /usr/bin/test -v docker-flags-827000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:43:43.079534    9249 oci.go:107] Successfully prepared a docker volume docker-flags-827000
	I1212 15:43:43.079578    9249 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:43:43.079592    9249 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:43:43.079686    9249 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-827000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:49:42.351640    9249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:49:42.351725    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:42.402509    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:49:42.402631    9249 retry.go:31] will retry after 265.099853ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:42.669552    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:42.721527    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:49:42.721649    9249 retry.go:31] will retry after 534.623531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:43.256595    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:43.308852    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:49:43.309022    9249 retry.go:31] will retry after 420.71559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:43.730020    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:43.780605    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	W1212 15:49:43.780721    9249 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	W1212 15:49:43.780750    9249 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:43.780810    9249 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:49:43.780896    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:43.831779    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:49:43.831867    9249 retry.go:31] will retry after 354.007885ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:44.186429    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:44.237840    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:49:44.237938    9249 retry.go:31] will retry after 280.306371ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:44.520413    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:44.572741    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:49:44.572858    9249 retry.go:31] will retry after 587.367559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:45.160514    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:49:45.211051    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	W1212 15:49:45.211150    9249 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	W1212 15:49:45.211175    9249 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:45.211189    9249 start.go:128] duration metric: createHost completed in 6m2.881924999s
	I1212 15:49:45.211195    9249 start.go:83] releasing machines lock for "docker-flags-827000", held for 6m2.882037438s
	W1212 15:49:45.211213    9249 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1212 15:49:45.212108    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:45.262596    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:45.262661    9249 delete.go:82] Unable to get host status for docker-flags-827000, assuming it has already been deleted: state: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	W1212 15:49:45.262735    9249 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1212 15:49:45.262747    9249 start.go:709] Will try again in 5 seconds ...
	I1212 15:49:50.263088    9249 start.go:365] acquiring machines lock for docker-flags-827000: {Name:mk64a6766c1bc690c81b843c996cc615acac4dc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:49:50.263596    9249 start.go:369] acquired machines lock for "docker-flags-827000" in 475.28µs
	I1212 15:49:50.263648    9249 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:49:50.263659    9249 fix.go:54] fixHost starting: 
	I1212 15:49:50.263926    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:50.316496    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:50.316548    9249 fix.go:102] recreateIfNeeded on docker-flags-827000: state= err=unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:50.316564    9249 fix.go:107] machineExists: false. err=machine does not exist
	I1212 15:49:50.358438    9249 out.go:177] * docker "docker-flags-827000" container is missing, will recreate.
	I1212 15:49:50.379433    9249 delete.go:124] DEMOLISHING docker-flags-827000 ...
	I1212 15:49:50.379629    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:50.432785    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	W1212 15:49:50.432844    9249 stop.go:75] unable to get state: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:50.432861    9249 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:50.433260    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:50.485182    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:50.485234    9249 delete.go:82] Unable to get host status for docker-flags-827000, assuming it has already been deleted: state: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:50.485321    9249 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-827000
	W1212 15:49:50.536832    9249 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-827000 returned with exit code 1
	I1212 15:49:50.536882    9249 kic.go:371] could not find the container docker-flags-827000 to remove it. will try anyways
	I1212 15:49:50.536964    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:50.589009    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	W1212 15:49:50.589056    9249 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:50.589147    9249 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-827000 /bin/bash -c "sudo init 0"
	W1212 15:49:50.641451    9249 cli_runner.go:211] docker exec --privileged -t docker-flags-827000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 15:49:50.641485    9249 oci.go:650] error shutdown docker-flags-827000: docker exec --privileged -t docker-flags-827000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:51.641776    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:51.693117    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:51.693175    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:51.693186    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:49:51.693212    9249 retry.go:31] will retry after 745.490777ms: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:52.438873    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:52.490777    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:52.490824    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:52.490833    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:49:52.490858    9249 retry.go:31] will retry after 846.996342ms: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:53.338129    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:53.389011    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:53.389073    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:53.389082    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:49:53.389106    9249 retry.go:31] will retry after 685.823242ms: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:54.077168    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:54.129605    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:54.129666    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:54.129680    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:49:54.129705    9249 retry.go:31] will retry after 1.441535055s: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:55.571683    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:55.622396    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:55.622440    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:55.622458    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:49:55.622484    9249 retry.go:31] will retry after 3.28177291s: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:58.904640    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:49:58.956727    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:58.956776    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:49:58.956785    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:49:58.956810    9249 retry.go:31] will retry after 3.682890687s: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:50:02.639974    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:50:02.690690    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:50:02.690738    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:50:02.690747    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:50:02.690773    9249 retry.go:31] will retry after 8.155347585s: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:50:10.847287    9249 cli_runner.go:164] Run: docker container inspect docker-flags-827000 --format={{.State.Status}}
	W1212 15:50:10.898701    9249 cli_runner.go:211] docker container inspect docker-flags-827000 --format={{.State.Status}} returned with exit code 1
	I1212 15:50:10.898754    9249 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:50:10.898761    9249 oci.go:664] temporary error: container docker-flags-827000 status is  but expect it to be exited
	I1212 15:50:10.898795    9249 oci.go:88] couldn't shut down docker-flags-827000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	 
	I1212 15:50:10.898878    9249 cli_runner.go:164] Run: docker rm -f -v docker-flags-827000
	I1212 15:50:10.949657    9249 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-827000
	W1212 15:50:11.002190    9249 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-827000 returned with exit code 1
	I1212 15:50:11.002314    9249 cli_runner.go:164] Run: docker network inspect docker-flags-827000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:50:11.053517    9249 cli_runner.go:164] Run: docker network rm docker-flags-827000
	I1212 15:50:11.158481    9249 fix.go:114] Sleeping 1 second for extra luck!
	I1212 15:50:12.158815    9249 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:50:12.180409    9249 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:50:12.180511    9249 start.go:159] libmachine.API.Create for "docker-flags-827000" (driver="docker")
	I1212 15:50:12.180547    9249 client.go:168] LocalClient.Create starting
	I1212 15:50:12.180661    9249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:50:12.180715    9249 main.go:141] libmachine: Decoding PEM data...
	I1212 15:50:12.180731    9249 main.go:141] libmachine: Parsing certificate...
	I1212 15:50:12.180785    9249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:50:12.180825    9249 main.go:141] libmachine: Decoding PEM data...
	I1212 15:50:12.180833    9249 main.go:141] libmachine: Parsing certificate...
	I1212 15:50:12.202011    9249 cli_runner.go:164] Run: docker network inspect docker-flags-827000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:50:12.253655    9249 cli_runner.go:211] docker network inspect docker-flags-827000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:50:12.253745    9249 network_create.go:281] running [docker network inspect docker-flags-827000] to gather additional debugging logs...
	I1212 15:50:12.253766    9249 cli_runner.go:164] Run: docker network inspect docker-flags-827000
	W1212 15:50:12.304863    9249 cli_runner.go:211] docker network inspect docker-flags-827000 returned with exit code 1
	I1212 15:50:12.304901    9249 network_create.go:284] error running [docker network inspect docker-flags-827000]: docker network inspect docker-flags-827000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-827000 not found
	I1212 15:50:12.304916    9249 network_create.go:286] output of [docker network inspect docker-flags-827000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-827000 not found
	
	** /stderr **
	I1212 15:50:12.305049    9249 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:50:12.357656    9249 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:50:12.359130    9249 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:50:12.360696    9249 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:50:12.362173    9249 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:50:12.363609    9249 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:50:12.364842    9249 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022dd920}
	I1212 15:50:12.364861    9249 network_create.go:124] attempt to create docker network docker-flags-827000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1212 15:50:12.364937    9249 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-827000 docker-flags-827000
	I1212 15:50:12.453894    9249 network_create.go:108] docker network docker-flags-827000 192.168.94.0/24 created
	I1212 15:50:12.453933    9249 kic.go:121] calculated static IP "192.168.94.2" for the "docker-flags-827000" container
	I1212 15:50:12.454048    9249 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:50:12.509707    9249 cli_runner.go:164] Run: docker volume create docker-flags-827000 --label name.minikube.sigs.k8s.io=docker-flags-827000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:50:12.560614    9249 oci.go:103] Successfully created a docker volume docker-flags-827000
	I1212 15:50:12.560729    9249 cli_runner.go:164] Run: docker run --rm --name docker-flags-827000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-827000 --entrypoint /usr/bin/test -v docker-flags-827000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:50:12.869685    9249 oci.go:107] Successfully prepared a docker volume docker-flags-827000
	I1212 15:50:12.869721    9249 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:50:12.869734    9249 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:50:12.869837    9249 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-827000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:56:12.194533    9249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:56:12.194661    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:12.248057    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:12.248168    9249 retry.go:31] will retry after 165.711301ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:12.416304    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:12.468062    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:12.468159    9249 retry.go:31] will retry after 448.412223ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:12.918315    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:12.973239    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:12.973366    9249 retry.go:31] will retry after 721.344227ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:13.695059    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:13.749109    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	W1212 15:56:13.749225    9249 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	W1212 15:56:13.749245    9249 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:13.749310    9249 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:56:13.749364    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:13.800766    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:13.800863    9249 retry.go:31] will retry after 344.981439ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:14.146330    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:14.200346    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:14.200446    9249 retry.go:31] will retry after 304.520569ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:14.505359    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:14.556650    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:14.556756    9249 retry.go:31] will retry after 781.885698ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:15.341085    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:15.395179    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	W1212 15:56:15.395279    9249 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	W1212 15:56:15.395304    9249 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:15.395317    9249 start.go:128] duration metric: createHost completed in 6m3.224775496s
	I1212 15:56:15.395389    9249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:56:15.395445    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:15.445251    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:15.445352    9249 retry.go:31] will retry after 127.712304ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:15.574748    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:15.651285    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:15.651405    9249 retry.go:31] will retry after 293.552759ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:15.945203    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:15.997112    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:15.997201    9249 retry.go:31] will retry after 384.194445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:16.383804    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:16.438472    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:16.438562    9249 retry.go:31] will retry after 492.578935ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:16.932024    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:16.986384    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	W1212 15:56:16.986487    9249 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	W1212 15:56:16.986511    9249 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:16.986573    9249 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:56:16.986626    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:17.036730    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:17.036835    9249 retry.go:31] will retry after 267.247125ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:17.304426    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:17.360679    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:17.360774    9249 retry.go:31] will retry after 270.784006ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:17.632432    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:17.686707    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	I1212 15:56:17.686808    9249 retry.go:31] will retry after 365.065567ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:18.052536    9249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000
	W1212 15:56:18.105516    9249 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000 returned with exit code 1
	W1212 15:56:18.105623    9249 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	W1212 15:56:18.105646    9249 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-827000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-827000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	I1212 15:56:18.105659    9249 fix.go:56] fixHost completed within 6m27.828915324s
	I1212 15:56:18.105666    9249 start.go:83] releasing machines lock for "docker-flags-827000", held for 6m27.828945287s
	W1212 15:56:18.105739    9249 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-827000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-827000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1212 15:56:18.149111    9249 out.go:177] 
	W1212 15:56:18.171378    9249 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1212 15:56:18.171434    9249 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1212 15:56:18.171487    9249 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1212 15:56:18.193115    9249 out.go:177] 

                                                
                                                
** /stderr **
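A note on the stderr above: the node container for this profile was never created before the 360-second createHost timeout expired, so every poll of the container state returned "No such container" and the start exited with status 52. As a minimal sketch (profile name taken from this run), the same checks the log performs can be repeated by hand:

    # Poll the container state the way the log does; "No such container" means it was never created
    docker container inspect docker-flags-827000 --format '{{.State.Status}}'
    # The docker network, by contrast, was created successfully and is what the post-mortem below finds
    docker network inspect docker-flags-827000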
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-827000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-827000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-827000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (199.369691ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-827000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-827000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-827000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (199.676635ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-827000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-827000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
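For context, these assertions check that the --docker-env and --docker-opt values passed to "minikube start" actually reach the Docker daemon inside the node. A minimal sketch of the verification a passing run would perform (the commands and expected substrings come from the test's own messages; no output from a healthy cluster is shown here):

    # Environment= should include FOO=BAR and BAZ=BAT
    out/minikube-darwin-amd64 -p docker-flags-827000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # ExecStart should include --debug, from --docker-opt=debug
    out/minikube-darwin-amd64 -p docker-flags-827000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

In this run both commands exit 80 with empty output because the node container does not exist.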
panic.go:523: *** TestDockerFlags FAILED at 2023-12-12 15:56:18.66874 -0800 PST m=+6824.760128219
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-827000
helpers_test.go:235: (dbg) docker inspect docker-flags-827000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "docker-flags-827000",
	        "Id": "2efaf522c9f688b52838e0a4a8fb5320ec46504d8b307cbd38c2ed29e353b24d",
	        "Created": "2023-12-12T23:50:12.412198119Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-827000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-827000 -n docker-flags-827000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-827000 -n docker-flags-827000: exit status 7 (107.269588ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:56:18.828668    9729 status.go:249] status error: host: state: unknown state "docker-flags-827000": docker container inspect docker-flags-827000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-827000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-827000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-827000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-827000
--- FAIL: TestDockerFlags (758.07s)
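The post-mortem confirms the asymmetry: the docker network docker-flags-827000 exists with no attached containers, while the node container does not, so status reports "Nonexistent" and log retrieval is skipped. A minimal manual cleanup, mirroring what the failure message and the test teardown already suggest:

    # Remove the profile, as the suggestion in the output recommends
    out/minikube-darwin-amd64 delete -p docker-flags-827000
    # If the orphaned network survives the delete, it can be removed directly
    docker network rm docker-flags-827000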

                                                
                                    
TestForceSystemdFlag (753.15s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-531000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-531000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m31.990965224s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-531000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-531000 in cluster force-systemd-flag-531000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-531000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:43:12.446482    9130 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:43:12.446793    9130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:43:12.446799    9130 out.go:309] Setting ErrFile to fd 2...
	I1212 15:43:12.446803    9130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:43:12.446995    9130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 15:43:12.448502    9130 out.go:303] Setting JSON to false
	I1212 15:43:12.477027    9130 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6162,"bootTime":1702418430,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 15:43:12.477127    9130 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:43:12.498766    9130 out.go:177] * [force-systemd-flag-531000] minikube v1.32.0 on Darwin 14.2
	I1212 15:43:12.541675    9130 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 15:43:12.541731    9130 notify.go:220] Checking for updates...
	I1212 15:43:12.563638    9130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 15:43:12.584818    9130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:43:12.606631    9130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:43:12.629789    9130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 15:43:12.673818    9130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:43:12.695806    9130 config.go:182] Loaded profile config "force-systemd-env-347000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:43:12.695988    9130 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:43:12.753856    9130 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 15:43:12.754005    9130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:43:13.069504    9130 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-12-12 23:43:13.05838173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unco
nfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Ma
nages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins
/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:43:13.111609    9130 out.go:177] * Using the docker driver based on user configuration
	I1212 15:43:13.132592    9130 start.go:298] selected driver: docker
	I1212 15:43:13.132632    9130 start.go:902] validating driver "docker" against <nil>
	I1212 15:43:13.132652    9130 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:43:13.140598    9130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:43:13.244219    9130 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-12-12 23:43:13.233374825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unc
onfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:M
anages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugin
s/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:43:13.244389    9130 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 15:43:13.244635    9130 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 15:43:13.265578    9130 out.go:177] * Using Docker Desktop driver with root privileges
	I1212 15:43:13.286806    9130 cni.go:84] Creating CNI manager for ""
	I1212 15:43:13.286853    9130 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 15:43:13.286883    9130 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 15:43:13.286913    9130 start_flags.go:323] config:
	{Name:force-systemd-flag-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-531000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:43:13.308726    9130 out.go:177] * Starting control plane node force-systemd-flag-531000 in cluster force-systemd-flag-531000
	I1212 15:43:13.329713    9130 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 15:43:13.351869    9130 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 15:43:13.373696    9130 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:43:13.373764    9130 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 15:43:13.373765    9130 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:43:13.373793    9130 cache.go:56] Caching tarball of preloaded images
	I1212 15:43:13.375261    9130 preload.go:174] Found /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:43:13.375357    9130 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:43:13.375700    9130 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/force-systemd-flag-531000/config.json ...
	I1212 15:43:13.375758    9130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/force-systemd-flag-531000/config.json: {Name:mk637cca7b2e7d77ff47d98d0503bd20373a24c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:43:13.427467    9130 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 15:43:13.427488    9130 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 15:43:13.427521    9130 cache.go:194] Successfully downloaded all kic artifacts
	I1212 15:43:13.427559    9130 start.go:365] acquiring machines lock for force-systemd-flag-531000: {Name:mkf41a0f470083189fe7e7bba8771cf1ed4ea224 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:43:13.427954    9130 start.go:369] acquired machines lock for "force-systemd-flag-531000" in 380.646µs
	I1212 15:43:13.427985    9130 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-531000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:43:13.428050    9130 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:43:13.470649    9130 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:43:13.471006    9130 start.go:159] libmachine.API.Create for "force-systemd-flag-531000" (driver="docker")
	I1212 15:43:13.471044    9130 client.go:168] LocalClient.Create starting
	I1212 15:43:13.471296    9130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:43:13.471753    9130 main.go:141] libmachine: Decoding PEM data...
	I1212 15:43:13.471806    9130 main.go:141] libmachine: Parsing certificate...
	I1212 15:43:13.471931    9130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:43:13.472219    9130 main.go:141] libmachine: Decoding PEM data...
	I1212 15:43:13.472243    9130 main.go:141] libmachine: Parsing certificate...
	I1212 15:43:13.473172    9130 cli_runner.go:164] Run: docker network inspect force-systemd-flag-531000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:43:13.524923    9130 cli_runner.go:211] docker network inspect force-systemd-flag-531000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:43:13.525013    9130 network_create.go:281] running [docker network inspect force-systemd-flag-531000] to gather additional debugging logs...
	I1212 15:43:13.525030    9130 cli_runner.go:164] Run: docker network inspect force-systemd-flag-531000
	W1212 15:43:13.576057    9130 cli_runner.go:211] docker network inspect force-systemd-flag-531000 returned with exit code 1
	I1212 15:43:13.576083    9130 network_create.go:284] error running [docker network inspect force-systemd-flag-531000]: docker network inspect force-systemd-flag-531000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-531000 not found
	I1212 15:43:13.576096    9130 network_create.go:286] output of [docker network inspect force-systemd-flag-531000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-531000 not found
	
	** /stderr **
	I1212 15:43:13.576239    9130 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:43:13.628660    9130 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:43:13.629073    9130 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002392f10}
	I1212 15:43:13.629087    9130 network_create.go:124] attempt to create docker network force-systemd-flag-531000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1212 15:43:13.629167    9130 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-531000 force-systemd-flag-531000
	W1212 15:43:13.680515    9130 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-531000 force-systemd-flag-531000 returned with exit code 1
	W1212 15:43:13.680557    9130 network_create.go:149] failed to create docker network force-systemd-flag-531000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-531000 force-systemd-flag-531000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1212 15:43:13.680578    9130 network_create.go:116] failed to create docker network force-systemd-flag-531000 192.168.58.0/24, will retry: subnet is taken
	I1212 15:43:13.681962    9130 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:43:13.682351    9130 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022baac0}
	I1212 15:43:13.682365    9130 network_create.go:124] attempt to create docker network force-systemd-flag-531000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1212 15:43:13.682431    9130 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-531000 force-systemd-flag-531000
	I1212 15:43:13.771336    9130 network_create.go:108] docker network force-systemd-flag-531000 192.168.67.0/24 created
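The "Pool overlaps with other one on this address space" error above means some existing Docker network already claims 192.168.58.0/24, which is why minikube retries with the next private /24. A quick way to see which subnets are already taken (a diagnostic sketch, not part of the test output):

	docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'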
	I1212 15:43:13.771382    9130 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-531000" container
	I1212 15:43:13.771494    9130 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:43:13.825063    9130 cli_runner.go:164] Run: docker volume create force-systemd-flag-531000 --label name.minikube.sigs.k8s.io=force-systemd-flag-531000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:43:13.877361    9130 oci.go:103] Successfully created a docker volume force-systemd-flag-531000
	I1212 15:43:13.877480    9130 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-531000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-531000 --entrypoint /usr/bin/test -v force-systemd-flag-531000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:43:14.287926    9130 oci.go:107] Successfully prepared a docker volume force-systemd-flag-531000
	I1212 15:43:14.287967    9130 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:43:14.287985    9130 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:43:14.288092    9130 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-531000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
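The step above extracts the cached preload tarball into the machine volume using tar inside the kicbase image. If it appears to stall (note the next log entry is nearly six minutes later), one simple host-side check is that the tarball referenced in the command exists and is non-empty (sketch):

	ls -lh /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4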
	I1212 15:49:13.482872    9130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:49:13.484516    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:13.536651    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:13.536795    9130 retry.go:31] will retry after 344.929833ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:13.884074    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:13.936020    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:13.936133    9130 retry.go:31] will retry after 440.833522ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:14.377293    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:14.428917    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:14.429020    9130 retry.go:31] will retry after 342.514344ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:14.771933    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:14.824203    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:14.824320    9130 retry.go:31] will retry after 526.503694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:15.351140    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:15.402006    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	W1212 15:49:15.402115    9130 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	
	W1212 15:49:15.402132    9130 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:15.402194    9130 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:49:15.402273    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:15.453625    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:15.453724    9130 retry.go:31] will retry after 147.971284ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:15.601875    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:15.652363    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:15.652463    9130 retry.go:31] will retry after 493.82947ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:16.146727    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:16.197410    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:16.197500    9130 retry.go:31] will retry after 301.121887ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:16.498938    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:49:16.553278    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	W1212 15:49:16.553382    9130 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	
	W1212 15:49:16.553403    9130 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:16.553429    9130 start.go:128] duration metric: createHost completed in 6m3.114475788s
	I1212 15:49:16.553437    9130 start.go:83] releasing machines lock for "force-systemd-flag-531000", held for 6m3.114581868s
	W1212 15:49:16.553452    9130 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1212 15:49:16.554342    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:16.604963    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:16.605018    9130 delete.go:82] Unable to get host status for force-systemd-flag-531000, assuming it has already been deleted: state: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	W1212 15:49:16.605114    9130 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1212 15:49:16.605127    9130 start.go:709] Will try again in 5 seconds ...
	I1212 15:49:21.605423    9130 start.go:365] acquiring machines lock for force-systemd-flag-531000: {Name:mkf41a0f470083189fe7e7bba8771cf1ed4ea224 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:49:21.605980    9130 start.go:369] acquired machines lock for "force-systemd-flag-531000" in 78.576µs
	I1212 15:49:21.606004    9130 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:49:21.606013    9130 fix.go:54] fixHost starting: 
	I1212 15:49:21.606275    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:21.658713    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:21.658757    9130 fix.go:102] recreateIfNeeded on force-systemd-flag-531000: state= err=unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:21.658777    9130 fix.go:107] machineExists: false. err=machine does not exist
	I1212 15:49:21.700416    9130 out.go:177] * docker "force-systemd-flag-531000" container is missing, will recreate.
	I1212 15:49:21.721376    9130 delete.go:124] DEMOLISHING force-systemd-flag-531000 ...
	I1212 15:49:21.721569    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:21.773135    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	W1212 15:49:21.773179    9130 stop.go:75] unable to get state: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:21.773200    9130 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:21.773590    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:21.825263    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:21.825325    9130 delete.go:82] Unable to get host status for force-systemd-flag-531000, assuming it has already been deleted: state: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:21.825416    9130 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-531000
	W1212 15:49:21.875826    9130 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:21.875862    9130 kic.go:371] could not find the container force-systemd-flag-531000 to remove it. will try anyways
	I1212 15:49:21.875956    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:21.927081    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	W1212 15:49:21.927144    9130 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:21.927242    9130 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-531000 /bin/bash -c "sudo init 0"
	W1212 15:49:21.979075    9130 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-531000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 15:49:21.979112    9130 oci.go:650] error shutdown force-systemd-flag-531000: docker exec --privileged -t force-systemd-flag-531000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:22.979311    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:23.031675    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:23.031730    9130 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:23.031741    9130 oci.go:664] temporary error: container force-systemd-flag-531000 status is  but expect it to be exited
	I1212 15:49:23.031766    9130 retry.go:31] will retry after 484.41484ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:23.516376    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:23.567603    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:23.567656    9130 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:23.567667    9130 oci.go:664] temporary error: container force-systemd-flag-531000 status is  but expect it to be exited
	I1212 15:49:23.567693    9130 retry.go:31] will retry after 864.870223ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:24.433440    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:24.487089    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:24.487166    9130 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:24.487182    9130 oci.go:664] temporary error: container force-systemd-flag-531000 status is  but expect it to be exited
	I1212 15:49:24.487208    9130 retry.go:31] will retry after 709.02621ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:25.196618    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:25.248038    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:25.248097    9130 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:25.248110    9130 oci.go:664] temporary error: container force-systemd-flag-531000 status is  but expect it to be exited
	I1212 15:49:25.248153    9130 retry.go:31] will retry after 2.032643882s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:27.281299    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:27.332904    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:27.332954    9130 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:27.332971    9130 oci.go:664] temporary error: container force-systemd-flag-531000 status is  but expect it to be exited
	I1212 15:49:27.332997    9130 retry.go:31] will retry after 3.777792588s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:31.111155    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:31.164161    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:31.164216    9130 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:31.164228    9130 oci.go:664] temporary error: container force-systemd-flag-531000 status is  but expect it to be exited
	I1212 15:49:31.164253    9130 retry.go:31] will retry after 5.660529968s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:36.825155    9130 cli_runner.go:164] Run: docker container inspect force-systemd-flag-531000 --format={{.State.Status}}
	W1212 15:49:36.876101    9130 cli_runner.go:211] docker container inspect force-systemd-flag-531000 --format={{.State.Status}} returned with exit code 1
	I1212 15:49:36.876151    9130 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:49:36.876163    9130 oci.go:664] temporary error: container force-systemd-flag-531000 status is  but expect it to be exited
	I1212 15:49:36.876193    9130 oci.go:88] couldn't shut down force-systemd-flag-531000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	 
	I1212 15:49:36.876280    9130 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-531000
	I1212 15:49:36.928152    9130 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-531000
	W1212 15:49:36.979819    9130 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:36.979951    9130 cli_runner.go:164] Run: docker network inspect force-systemd-flag-531000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:49:37.032098    9130 cli_runner.go:164] Run: docker network rm force-systemd-flag-531000
	I1212 15:49:37.143492    9130 fix.go:114] Sleeping 1 second for extra luck!
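The demolish sequence above (force-remove the container, then remove its network) is equivalent to the following manual cleanup, which is also a reasonable recovery step when a profile is stuck in this half-created state (sketch based on the commands shown in the log):

	docker rm -f -v force-systemd-flag-531000
	docker network rm force-systemd-flag-531000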
	I1212 15:49:38.143738    9130 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:49:38.164543    9130 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:49:38.164649    9130 start.go:159] libmachine.API.Create for "force-systemd-flag-531000" (driver="docker")
	I1212 15:49:38.164685    9130 client.go:168] LocalClient.Create starting
	I1212 15:49:38.165631    9130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:49:38.165888    9130 main.go:141] libmachine: Decoding PEM data...
	I1212 15:49:38.165908    9130 main.go:141] libmachine: Parsing certificate...
	I1212 15:49:38.166164    9130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:49:38.166359    9130 main.go:141] libmachine: Decoding PEM data...
	I1212 15:49:38.166372    9130 main.go:141] libmachine: Parsing certificate...
	I1212 15:49:38.186359    9130 cli_runner.go:164] Run: docker network inspect force-systemd-flag-531000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:49:38.238804    9130 cli_runner.go:211] docker network inspect force-systemd-flag-531000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:49:38.238903    9130 network_create.go:281] running [docker network inspect force-systemd-flag-531000] to gather additional debugging logs...
	I1212 15:49:38.238925    9130 cli_runner.go:164] Run: docker network inspect force-systemd-flag-531000
	W1212 15:49:38.289553    9130 cli_runner.go:211] docker network inspect force-systemd-flag-531000 returned with exit code 1
	I1212 15:49:38.289603    9130 network_create.go:284] error running [docker network inspect force-systemd-flag-531000]: docker network inspect force-systemd-flag-531000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-531000 not found
	I1212 15:49:38.289620    9130 network_create.go:286] output of [docker network inspect force-systemd-flag-531000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-531000 not found
	
	** /stderr **
	I1212 15:49:38.289751    9130 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:49:38.342242    9130 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:49:38.343717    9130 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:49:38.345209    9130 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:49:38.346810    9130 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:49:38.347672    9130 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002344000}
	I1212 15:49:38.347686    9130 network_create.go:124] attempt to create docker network force-systemd-flag-531000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1212 15:49:38.347750    9130 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-531000 force-systemd-flag-531000
	I1212 15:49:38.435725    9130 network_create.go:108] docker network force-systemd-flag-531000 192.168.85.0/24 created
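minikube probes the private /24 ranges in order (192.168.49.0, .58.0, .67.0, .76.0, ...) until one is free; the networks it creates carry labels, so leftovers from earlier runs can be listed with (sketch):

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true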
	I1212 15:49:38.435779    9130 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-531000" container
	I1212 15:49:38.435892    9130 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:49:38.491855    9130 cli_runner.go:164] Run: docker volume create force-systemd-flag-531000 --label name.minikube.sigs.k8s.io=force-systemd-flag-531000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:49:38.543353    9130 oci.go:103] Successfully created a docker volume force-systemd-flag-531000
	I1212 15:49:38.543503    9130 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-531000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-531000 --entrypoint /usr/bin/test -v force-systemd-flag-531000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:49:38.856181    9130 oci.go:107] Successfully prepared a docker volume force-systemd-flag-531000
	I1212 15:49:38.856213    9130 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:49:38.856225    9130 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:49:38.856329    9130 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-531000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:55:38.179454    9130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:55:38.179608    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:38.233277    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:38.233407    9130 retry.go:31] will retry after 131.07572ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:38.366894    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:38.418371    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:38.418496    9130 retry.go:31] will retry after 482.830723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:38.903757    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:38.958040    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:38.958141    9130 retry.go:31] will retry after 331.014024ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:39.289505    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:39.345424    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:39.345523    9130 retry.go:31] will retry after 603.695367ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:39.949842    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:40.004312    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	W1212 15:55:40.004436    9130 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	
	W1212 15:55:40.004461    9130 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:40.004517    9130 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:55:40.004597    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:40.054304    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:40.054400    9130 retry.go:31] will retry after 364.64236ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:40.420632    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:40.476370    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:40.476474    9130 retry.go:31] will retry after 288.907791ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:40.766485    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:40.818857    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:40.818960    9130 retry.go:31] will retry after 775.053164ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:41.596450    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:41.650346    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	W1212 15:55:41.650463    9130 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	
	W1212 15:55:41.650484    9130 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:41.650499    9130 start.go:128] duration metric: createHost completed in 6m3.494203383s
	I1212 15:55:41.650567    9130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:55:41.650622    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:41.701564    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:41.701657    9130 retry.go:31] will retry after 151.084998ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:41.853625    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:41.904713    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:41.904803    9130 retry.go:31] will retry after 408.340149ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:42.314031    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:42.368493    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:42.368592    9130 retry.go:31] will retry after 338.626675ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:42.707388    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:42.757286    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	W1212 15:55:42.757385    9130 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	
	W1212 15:55:42.757418    9130 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:42.757473    9130 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:55:42.757541    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:42.808002    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:42.808112    9130 retry.go:31] will retry after 193.422839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:43.002273    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:43.055962    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:43.056057    9130 retry.go:31] will retry after 348.395607ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:43.406747    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:43.461234    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	I1212 15:55:43.461371    9130 retry.go:31] will retry after 735.42039ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:44.197779    9130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000
	W1212 15:55:44.249322    9130 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000 returned with exit code 1
	W1212 15:55:44.249423    9130 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	
	W1212 15:55:44.249443    9130 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-531000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-531000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	I1212 15:55:44.249455    9130 fix.go:56] fixHost completed within 6m22.630339505s
	I1212 15:55:44.249463    9130 start.go:83] releasing machines lock for "force-systemd-flag-531000", held for 6m22.630370707s
	W1212 15:55:44.249551    9130 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-531000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-531000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1212 15:55:44.292836    9130 out.go:177] 
	W1212 15:55:44.313998    9130 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1212 15:55:44.314054    9130 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1212 15:55:44.314098    9130 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1212 15:55:44.357020    9130 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-531000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
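Per the DRV_CREATE_TIMEOUT suggestion in the log above, the usual recovery is to delete the half-created profile and retry; a sketch of that sequence (outcome not verified here):

	out/minikube-darwin-amd64 delete -p force-systemd-flag-531000
	out/minikube-darwin-amd64 start -p force-systemd-flag-531000 --memory=2048 --force-systemd --driver=docker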
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-531000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-531000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (225.128106ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-531000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-12-12 15:55:44.640371 -0800 PST m=+6790.732606431
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-531000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-531000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-531000",
	        "Id": "1c6baa44f53c643057ff890234d95139535213b3103804a82e9ee249d83d7e8e",
	        "Created": "2023-12-12T23:49:38.395192338Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-531000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-531000 -n force-systemd-flag-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-531000 -n force-systemd-flag-531000: exit status 7 (107.165383ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1212 15:55:44.799287    9607 status.go:249] status error: host: state: unknown state "force-systemd-flag-531000": docker container inspect force-systemd-flag-531000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-531000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-531000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-531000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-531000
--- FAIL: TestForceSystemdFlag (753.15s)
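Note on the failure above: TestForceSystemdFlag exits with DRV_CREATE_TIMEOUT because the "force-systemd-flag-531000" container is never created within the 360-second host-creation window, so every SSH port lookup fails with "No such container". The port probe that minikube retries, and the cleanup it suggests, can be replayed by hand with the commands below (both are taken from the log above; the profile name applies only to this run):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-531000
    out/minikube-darwin-amd64 delete -p force-systemd-flag-531000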

TestForceSystemdEnv (754.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-347000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1212 15:32:15.894957    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:32:42.628720    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:35:19.066502    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:37:16.016108    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:37:42.751623    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:40:45.810036    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:42:16.023292    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:42:42.760687    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-347000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.358278245s)

-- stdout --
	* [force-systemd-env-347000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-347000 in cluster force-systemd-env-347000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-347000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1212 15:31:06.887990    8763 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:31:06.888189    8763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:31:06.888194    8763 out.go:309] Setting ErrFile to fd 2...
	I1212 15:31:06.888198    8763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:31:06.888377    8763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 15:31:06.889833    8763 out.go:303] Setting JSON to false
	I1212 15:31:06.912700    8763 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5436,"bootTime":1702418430,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 15:31:06.912813    8763 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:31:06.936861    8763 out.go:177] * [force-systemd-env-347000] minikube v1.32.0 on Darwin 14.2
	I1212 15:31:06.977895    8763 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 15:31:06.978022    8763 notify.go:220] Checking for updates...
	I1212 15:31:07.001175    8763 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 15:31:07.022835    8763 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:31:07.045693    8763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:31:07.066876    8763 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 15:31:07.087759    8763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1212 15:31:07.109727    8763 config.go:182] Loaded profile config "offline-docker-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:31:07.109881    8763 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:31:07.166315    8763 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 15:31:07.166476    8763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:31:07.265489    8763 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-12-12 23:31:07.255711558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unco
nfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Ma
nages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins
/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:31:07.287071    8763 out.go:177] * Using the docker driver based on user configuration
	I1212 15:31:07.307999    8763 start.go:298] selected driver: docker
	I1212 15:31:07.308026    8763 start.go:902] validating driver "docker" against <nil>
	I1212 15:31:07.308043    8763 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:31:07.312296    8763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:31:07.409456    8763 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-12-12 23:31:07.401094333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unco
nfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Ma
nages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins
/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:31:07.409634    8763 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 15:31:07.409808    8763 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 15:31:07.431282    8763 out.go:177] * Using Docker Desktop driver with root privileges
	I1212 15:31:07.452289    8763 cni.go:84] Creating CNI manager for ""
	I1212 15:31:07.452332    8763 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 15:31:07.452353    8763 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 15:31:07.452367    8763 start_flags.go:323] config:
	{Name:force-systemd-env-347000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-347000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:31:07.496210    8763 out.go:177] * Starting control plane node force-systemd-env-347000 in cluster force-systemd-env-347000
	I1212 15:31:07.518341    8763 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 15:31:07.540287    8763 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 15:31:07.582309    8763 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:31:07.582378    8763 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:31:07.582412    8763 cache.go:56] Caching tarball of preloaded images
	I1212 15:31:07.582409    8763 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 15:31:07.582649    8763 preload.go:174] Found /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:31:07.582664    8763 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:31:07.582772    8763 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/force-systemd-env-347000/config.json ...
	I1212 15:31:07.582823    8763 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/force-systemd-env-347000/config.json: {Name:mk2b4dcb5032fa30c52b64283f8b03efffb57b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 15:31:07.635466    8763 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 15:31:07.635517    8763 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 15:31:07.635537    8763 cache.go:194] Successfully downloaded all kic artifacts
	I1212 15:31:07.635581    8763 start.go:365] acquiring machines lock for force-systemd-env-347000: {Name:mkf78878649e3360c4bb98e01066de84391be691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:31:07.635743    8763 start.go:369] acquired machines lock for "force-systemd-env-347000" in 148.57µs
	I1212 15:31:07.635771    8763 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-347000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-347000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 15:31:07.635836    8763 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:31:07.659321    8763 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:31:07.659688    8763 start.go:159] libmachine.API.Create for "force-systemd-env-347000" (driver="docker")
	I1212 15:31:07.659773    8763 client.go:168] LocalClient.Create starting
	I1212 15:31:07.659979    8763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:31:07.660076    8763 main.go:141] libmachine: Decoding PEM data...
	I1212 15:31:07.660118    8763 main.go:141] libmachine: Parsing certificate...
	I1212 15:31:07.660221    8763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:31:07.660290    8763 main.go:141] libmachine: Decoding PEM data...
	I1212 15:31:07.660311    8763 main.go:141] libmachine: Parsing certificate...
	I1212 15:31:07.661371    8763 cli_runner.go:164] Run: docker network inspect force-systemd-env-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:31:07.712433    8763 cli_runner.go:211] docker network inspect force-systemd-env-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:31:07.712534    8763 network_create.go:281] running [docker network inspect force-systemd-env-347000] to gather additional debugging logs...
	I1212 15:31:07.712552    8763 cli_runner.go:164] Run: docker network inspect force-systemd-env-347000
	W1212 15:31:07.762550    8763 cli_runner.go:211] docker network inspect force-systemd-env-347000 returned with exit code 1
	I1212 15:31:07.762583    8763 network_create.go:284] error running [docker network inspect force-systemd-env-347000]: docker network inspect force-systemd-env-347000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-347000 not found
	I1212 15:31:07.762596    8763 network_create.go:286] output of [docker network inspect force-systemd-env-347000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-347000 not found
	
	** /stderr **
	I1212 15:31:07.762738    8763 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:31:07.814118    8763 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:31:07.815767    8763 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:31:07.817389    8763 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:31:07.817757    8763 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023367d0}
	I1212 15:31:07.817770    8763 network_create.go:124] attempt to create docker network force-systemd-env-347000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1212 15:31:07.817836    8763 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-347000 force-systemd-env-347000
	I1212 15:31:07.903224    8763 network_create.go:108] docker network force-systemd-env-347000 192.168.76.0/24 created
	I1212 15:31:07.903262    8763 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-347000" container
	I1212 15:31:07.903380    8763 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:31:07.955544    8763 cli_runner.go:164] Run: docker volume create force-systemd-env-347000 --label name.minikube.sigs.k8s.io=force-systemd-env-347000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:31:08.006842    8763 oci.go:103] Successfully created a docker volume force-systemd-env-347000
	I1212 15:31:08.006960    8763 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-347000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-347000 --entrypoint /usr/bin/test -v force-systemd-env-347000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:31:08.400877    8763 oci.go:107] Successfully prepared a docker volume force-systemd-env-347000
	I1212 15:31:08.400929    8763 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:31:08.400944    8763 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:31:08.401033    8763 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-347000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:37:07.778615    8763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:37:07.778801    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:07.834065    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:37:07.834193    8763 retry.go:31] will retry after 138.302094ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:07.972940    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:08.027186    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:37:08.027277    8763 retry.go:31] will retry after 400.217679ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:08.429930    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:08.483876    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:37:08.484000    8763 retry.go:31] will retry after 738.282527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:09.223565    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:09.278644    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	W1212 15:37:09.278778    8763 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	
	W1212 15:37:09.278801    8763 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:09.278858    8763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:37:09.278927    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:09.331641    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:37:09.331763    8763 retry.go:31] will retry after 125.86879ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:09.458703    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:09.513079    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:37:09.513175    8763 retry.go:31] will retry after 374.774964ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:09.888625    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:09.942752    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:37:09.942844    8763 retry.go:31] will retry after 829.408724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:10.773634    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:37:10.827512    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	W1212 15:37:10.827611    8763 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	
	W1212 15:37:10.827627    8763 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:10.827645    8763 start.go:128] duration metric: createHost completed in 6m3.074656181s
	I1212 15:37:10.827652    8763 start.go:83] releasing machines lock for "force-systemd-env-347000", held for 6m3.074761575s
	W1212 15:37:10.827666    8763 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1212 15:37:10.828283    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:10.881678    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:10.881733    8763 delete.go:82] Unable to get host status for force-systemd-env-347000, assuming it has already been deleted: state: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	W1212 15:37:10.881815    8763 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1212 15:37:10.881827    8763 start.go:709] Will try again in 5 seconds ...
	I1212 15:37:15.885362    8763 start.go:365] acquiring machines lock for force-systemd-env-347000: {Name:mkf78878649e3360c4bb98e01066de84391be691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:37:15.885583    8763 start.go:369] acquired machines lock for "force-systemd-env-347000" in 174.12µs
	I1212 15:37:15.885633    8763 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:37:15.885648    8763 fix.go:54] fixHost starting: 
	I1212 15:37:15.886207    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:15.940190    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:15.940235    8763 fix.go:102] recreateIfNeeded on force-systemd-env-347000: state= err=unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:15.940257    8763 fix.go:107] machineExists: false. err=machine does not exist
	I1212 15:37:15.983641    8763 out.go:177] * docker "force-systemd-env-347000" container is missing, will recreate.
	I1212 15:37:16.004619    8763 delete.go:124] DEMOLISHING force-systemd-env-347000 ...
	I1212 15:37:16.004852    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:16.058896    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	W1212 15:37:16.058954    8763 stop.go:75] unable to get state: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:16.058977    8763 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:16.059423    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:16.112217    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:16.112281    8763 delete.go:82] Unable to get host status for force-systemd-env-347000, assuming it has already been deleted: state: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:16.112375    8763 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-347000
	W1212 15:37:16.162906    8763 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-347000 returned with exit code 1
	I1212 15:37:16.162941    8763 kic.go:371] could not find the container force-systemd-env-347000 to remove it. will try anyways
	I1212 15:37:16.163014    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:16.212555    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	W1212 15:37:16.212613    8763 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:16.212717    8763 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-347000 /bin/bash -c "sudo init 0"
	W1212 15:37:16.262387    8763 cli_runner.go:211] docker exec --privileged -t force-systemd-env-347000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 15:37:16.262428    8763 oci.go:650] error shutdown force-systemd-env-347000: docker exec --privileged -t force-systemd-env-347000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:17.262806    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:17.321298    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:17.321363    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:17.321377    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:17.321402    8763 retry.go:31] will retry after 304.171882ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:17.627924    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:17.681535    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:17.681580    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:17.681595    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:17.681618    8763 retry.go:31] will retry after 716.230049ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:18.398225    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:18.454332    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:18.454378    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:18.454393    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:18.454427    8763 retry.go:31] will retry after 1.418000091s: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:19.872780    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:19.927213    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:19.927268    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:19.927287    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:19.927314    8763 retry.go:31] will retry after 1.089501868s: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:21.017214    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:21.070769    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:21.070815    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:21.070829    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:21.070857    8763 retry.go:31] will retry after 3.744412198s: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:24.815806    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:24.870647    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:24.870708    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:24.870721    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:24.870758    8763 retry.go:31] will retry after 3.377482285s: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:28.248607    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:28.300542    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:28.300595    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:28.300608    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:28.300634    8763 retry.go:31] will retry after 4.429948083s: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:32.731306    8763 cli_runner.go:164] Run: docker container inspect force-systemd-env-347000 --format={{.State.Status}}
	W1212 15:37:32.785383    8763 cli_runner.go:211] docker container inspect force-systemd-env-347000 --format={{.State.Status}} returned with exit code 1
	I1212 15:37:32.785432    8763 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:37:32.785443    8763 oci.go:664] temporary error: container force-systemd-env-347000 status is  but expect it to be exited
	I1212 15:37:32.785476    8763 oci.go:88] couldn't shut down force-systemd-env-347000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	 
	I1212 15:37:32.785556    8763 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-347000
	I1212 15:37:32.838777    8763 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-347000
	W1212 15:37:32.890849    8763 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-347000 returned with exit code 1
	I1212 15:37:32.890953    8763 cli_runner.go:164] Run: docker network inspect force-systemd-env-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:37:32.945192    8763 cli_runner.go:164] Run: docker network rm force-systemd-env-347000
	I1212 15:37:33.055522    8763 fix.go:114] Sleeping 1 second for extra luck!
	I1212 15:37:34.056064    8763 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:37:34.077486    8763 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1212 15:37:34.077657    8763 start.go:159] libmachine.API.Create for "force-systemd-env-347000" (driver="docker")
	I1212 15:37:34.077698    8763 client.go:168] LocalClient.Create starting
	I1212 15:37:34.077947    8763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:37:34.078044    8763 main.go:141] libmachine: Decoding PEM data...
	I1212 15:37:34.078084    8763 main.go:141] libmachine: Parsing certificate...
	I1212 15:37:34.078173    8763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:37:34.078243    8763 main.go:141] libmachine: Decoding PEM data...
	I1212 15:37:34.078258    8763 main.go:141] libmachine: Parsing certificate...
	I1212 15:37:34.079255    8763 cli_runner.go:164] Run: docker network inspect force-systemd-env-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:37:34.132906    8763 cli_runner.go:211] docker network inspect force-systemd-env-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:37:34.133012    8763 network_create.go:281] running [docker network inspect force-systemd-env-347000] to gather additional debugging logs...
	I1212 15:37:34.133030    8763 cli_runner.go:164] Run: docker network inspect force-systemd-env-347000
	W1212 15:37:34.185449    8763 cli_runner.go:211] docker network inspect force-systemd-env-347000 returned with exit code 1
	I1212 15:37:34.185482    8763 network_create.go:284] error running [docker network inspect force-systemd-env-347000]: docker network inspect force-systemd-env-347000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-347000 not found
	I1212 15:37:34.185495    8763 network_create.go:286] output of [docker network inspect force-systemd-env-347000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-347000 not found
	
	** /stderr **
	I1212 15:37:34.185659    8763 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:37:34.240183    8763 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:34.241663    8763 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:34.243305    8763 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:34.244867    8763 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:34.246551    8763 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:37:34.247565    8763 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021ff990}
	I1212 15:37:34.247582    8763 network_create.go:124] attempt to create docker network force-systemd-env-347000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1212 15:37:34.247660    8763 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-347000 force-systemd-env-347000
	I1212 15:37:34.340063    8763 network_create.go:108] docker network force-systemd-env-347000 192.168.94.0/24 created
	I1212 15:37:34.340117    8763 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-env-347000" container
	I1212 15:37:34.340220    8763 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:37:34.394922    8763 cli_runner.go:164] Run: docker volume create force-systemd-env-347000 --label name.minikube.sigs.k8s.io=force-systemd-env-347000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:37:34.447450    8763 oci.go:103] Successfully created a docker volume force-systemd-env-347000
	I1212 15:37:34.447580    8763 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-347000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-347000 --entrypoint /usr/bin/test -v force-systemd-env-347000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:37:34.773555    8763 oci.go:107] Successfully prepared a docker volume force-systemd-env-347000
	I1212 15:37:34.773602    8763 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:37:34.773616    8763 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:37:34.773762    8763 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-347000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:43:34.092116    8763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:43:34.092250    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:34.144493    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:34.144616    8763 retry.go:31] will retry after 191.612268ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:34.336774    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:34.388845    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:34.388975    8763 retry.go:31] will retry after 386.302208ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:34.775834    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:34.826822    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:34.826930    8763 retry.go:31] will retry after 452.21173ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:35.281087    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:35.336009    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	W1212 15:43:35.336111    8763 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	
	W1212 15:43:35.336125    8763 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:35.336187    8763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:43:35.336251    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:35.386652    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:35.386776    8763 retry.go:31] will retry after 324.147533ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:35.713281    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:35.765652    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:35.765755    8763 retry.go:31] will retry after 331.093583ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:36.097296    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:36.148924    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:36.149047    8763 retry.go:31] will retry after 614.206038ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:36.765591    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:36.817616    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	W1212 15:43:36.817722    8763 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	
	W1212 15:43:36.817741    8763 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:36.817770    8763 start.go:128] duration metric: createHost completed in 6m2.750802068s
	I1212 15:43:36.817834    8763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:43:36.817886    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:36.868898    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:36.868997    8763 retry.go:31] will retry after 171.984974ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:37.043305    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:37.096046    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:37.096140    8763 retry.go:31] will retry after 428.180679ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:37.525774    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:37.578002    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:37.578101    8763 retry.go:31] will retry after 548.325338ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:38.126816    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:38.181504    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	W1212 15:43:38.181623    8763 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	
	W1212 15:43:38.181641    8763 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:38.181701    8763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:43:38.181770    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:38.232483    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:38.232575    8763 retry.go:31] will retry after 246.867017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:38.480057    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:38.532535    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:38.532631    8763 retry.go:31] will retry after 470.802924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:39.004326    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:39.059117    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:39.059203    8763 retry.go:31] will retry after 403.000298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:39.462727    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:39.515138    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	I1212 15:43:39.515237    8763 retry.go:31] will retry after 580.296965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:40.096112    8763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000
	W1212 15:43:40.147767    8763 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000 returned with exit code 1
	W1212 15:43:40.147865    8763 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	
	W1212 15:43:40.147884    8763 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	I1212 15:43:40.147902    8763 fix.go:56] fixHost completed within 6m24.250730326s
	I1212 15:43:40.147911    8763 start.go:83] releasing machines lock for "force-systemd-env-347000", held for 6m24.25078523s
	W1212 15:43:40.147991    8763 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-347000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-347000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1212 15:43:40.190789    8763 out.go:177] 
	W1212 15:43:40.212807    8763 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1212 15:43:40.212872    8763 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1212 15:43:40.212902    8763 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1212 15:43:40.256731    8763 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-347000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
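
Note on the failure above: the container "force-systemd-env-347000" was never actually created, so every "docker container inspect ... --format {{.State.Status}}" call returned "No such container" and minikube kept retrying with growing backoff (the retry.go lines) until the 360-second createHost budget expired and DRV_CREATE_TIMEOUT was reported; the six-minute gap between 15:37:34 and 15:43:34 is the preload-extraction "docker run ... tar -I lz4" step consuming that budget. The following is a minimal standalone sketch of the polling pattern visible in the log, not minikube's own retry.go; the container name and 360 s deadline come from the log above, while the backoff values are illustrative assumptions.

// poll_container_state.go: standalone sketch of polling a container's state
// with backoff until a deadline, treating "No such container" as "not yet created".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState runs `docker container inspect NAME --format {{.State.Status}}`
// and returns the trimmed status, or an error if the container does not exist.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("%s: %w", strings.TrimSpace(string(out)), err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "force-systemd-env-347000"          // container name from the failing test
	deadline := time.Now().Add(360 * time.Second)     // same budget as createHost above
	backoff := 200 * time.Millisecond                 // illustrative starting backoff

	for time.Now().Before(deadline) {
		state, err := containerState(name)
		if err == nil {
			fmt.Printf("container %q is %s\n", name, state)
			return
		}
		fmt.Printf("not ready (%v), retrying after %v\n", err, backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second { // cap the doubling backoff
			backoff *= 2
		}
	}
	fmt.Println("gave up: container never appeared before the deadline")
}
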
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-347000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-347000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (215.097634ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-347000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-12-12 15:43:40.547721 -0800 PST m=+6066.663301061
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-347000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-347000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-347000",
	        "Id": "cb0cdfc98c4f7a065fb90cf5e3ed5b08f7456fbf55effadcd28af1802dae6a58",
	        "Created": "2023-12-12T23:37:34.296360017Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-347000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
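
Worth noting: the "docker inspect force-systemd-env-347000" output above matched the leftover Docker network of that name (bridge driver, "Containers": {}), not a container; the container itself was never created. The profile cleanup below ("minikube delete -p force-systemd-env-347000") is the supported way to remove it. Purely as a reproduction aid, and not part of the test suite, orphaned minikube networks can also be located through the "created_by.minikube.sigs.k8s.io" label the log shows being applied at network-create time; a hedged standalone sketch:

// cleanup_orphan_networks.go: illustrative sketch that lists Docker networks
// carrying the minikube label and removes any with no attached containers,
// which is exactly the state the post-mortem output above shows.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List networks labelled by minikube.
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range strings.Fields(string(out)) {
		// {{len .Containers}} is 0 when nothing is attached to the network.
		n, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{len .Containers}}").Output()
		if err != nil {
			continue
		}
		if strings.TrimSpace(string(n)) == "0" {
			fmt.Println("removing orphaned network", name)
			_ = exec.Command("docker", "network", "rm", name).Run()
		}
	}
}
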
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-347000 -n force-systemd-env-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-347000 -n force-systemd-env-347000: exit status 7 (109.235216ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:43:40.711188    9225 status.go:249] status error: host: state: unknown state "force-systemd-env-347000": docker container inspect force-systemd-env-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-347000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-347000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-347000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-347000
--- FAIL: TestForceSystemdEnv (754.50s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (260.69s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-299000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1212 14:17:15.830985    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:17:42.568359    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:42.574882    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:42.587015    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:42.607141    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:42.647284    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:42.727441    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:42.888501    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:43.209845    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:43.521205    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:17:43.850584    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:45.131088    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:47.691362    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:17:52.812634    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:18:03.052879    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:18:23.533359    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:19:04.494111    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-299000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m20.656274691s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-299000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-299000 in cluster ingress-addon-legacy-299000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:15:05.213960    4432 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:15:05.214166    4432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:15:05.214172    4432 out.go:309] Setting ErrFile to fd 2...
	I1212 14:15:05.214176    4432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:15:05.214355    4432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:15:05.215722    4432 out.go:303] Setting JSON to false
	I1212 14:15:05.238322    4432 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":875,"bootTime":1702418430,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:15:05.238426    4432 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:15:05.259983    4432 out.go:177] * [ingress-addon-legacy-299000] minikube v1.32.0 on Darwin 14.2
	I1212 14:15:05.302664    4432 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 14:15:05.302765    4432 notify.go:220] Checking for updates...
	I1212 14:15:05.346684    4432 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:15:05.367587    4432 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:15:05.409537    4432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:15:05.430811    4432 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 14:15:05.451764    4432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 14:15:05.473277    4432 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 14:15:05.530624    4432 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:15:05.530796    4432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:15:05.630118    4432 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:false NGoroutines:54 SystemTime:2023-12-12 22:15:05.620611382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:15:05.672411    4432 out.go:177] * Using the docker driver based on user configuration
	I1212 14:15:05.693675    4432 start.go:298] selected driver: docker
	I1212 14:15:05.693702    4432 start.go:902] validating driver "docker" against <nil>
	I1212 14:15:05.693722    4432 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 14:15:05.698133    4432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:15:05.801088    4432 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:false NGoroutines:54 SystemTime:2023-12-12 22:15:05.79107276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/do
cker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:15:05.801258    4432 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 14:15:05.801467    4432 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 14:15:05.822549    4432 out.go:177] * Using Docker Desktop driver with root privileges
	I1212 14:15:05.843797    4432 cni.go:84] Creating CNI manager for ""
	I1212 14:15:05.843839    4432 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1212 14:15:05.843862    4432 start_flags.go:323] config:
	{Name:ingress-addon-legacy-299000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-299000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:15:05.886593    4432 out.go:177] * Starting control plane node ingress-addon-legacy-299000 in cluster ingress-addon-legacy-299000
	I1212 14:15:05.907720    4432 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 14:15:05.928619    4432 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 14:15:05.970834    4432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1212 14:15:05.970886    4432 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 14:15:06.023539    4432 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 14:15:06.023577    4432 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 14:15:06.027682    4432 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1212 14:15:06.027697    4432 cache.go:56] Caching tarball of preloaded images
	I1212 14:15:06.027882    4432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1212 14:15:06.070614    4432 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 14:15:06.093774    4432 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:15:06.172671    4432 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1212 14:15:11.801096    4432 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:15:11.801254    4432 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:15:12.427626    4432 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1212 14:15:12.427878    4432 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/config.json ...
	I1212 14:15:12.427904    4432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/config.json: {Name:mk14c66343033993d0d6d9897ead0cec102413e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:12.428195    4432 cache.go:194] Successfully downloaded all kic artifacts
	I1212 14:15:12.428229    4432 start.go:365] acquiring machines lock for ingress-addon-legacy-299000: {Name:mk420b7b304314ba87da5a605fcfda8b33e5a633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:15:12.428316    4432 start.go:369] acquired machines lock for "ingress-addon-legacy-299000" in 78.626µs
	I1212 14:15:12.428338    4432 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-299000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-299000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 14:15:12.428390    4432 start.go:125] createHost starting for "" (driver="docker")
	I1212 14:15:12.477362    4432 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 14:15:12.477535    4432 start.go:159] libmachine.API.Create for "ingress-addon-legacy-299000" (driver="docker")
	I1212 14:15:12.477558    4432 client.go:168] LocalClient.Create starting
	I1212 14:15:12.477671    4432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 14:15:12.477717    4432 main.go:141] libmachine: Decoding PEM data...
	I1212 14:15:12.477733    4432 main.go:141] libmachine: Parsing certificate...
	I1212 14:15:12.477779    4432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 14:15:12.477814    4432 main.go:141] libmachine: Decoding PEM data...
	I1212 14:15:12.477821    4432 main.go:141] libmachine: Parsing certificate...
	I1212 14:15:12.478262    4432 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-299000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 14:15:12.529570    4432 cli_runner.go:211] docker network inspect ingress-addon-legacy-299000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 14:15:12.529687    4432 network_create.go:281] running [docker network inspect ingress-addon-legacy-299000] to gather additional debugging logs...
	I1212 14:15:12.529708    4432 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-299000
	W1212 14:15:12.579982    4432 cli_runner.go:211] docker network inspect ingress-addon-legacy-299000 returned with exit code 1
	I1212 14:15:12.580015    4432 network_create.go:284] error running [docker network inspect ingress-addon-legacy-299000]: docker network inspect ingress-addon-legacy-299000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-299000 not found
	I1212 14:15:12.580034    4432 network_create.go:286] output of [docker network inspect ingress-addon-legacy-299000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-299000 not found
	
	** /stderr **
	I1212 14:15:12.580166    4432 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 14:15:12.631140    4432 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022d9460}
	I1212 14:15:12.631177    4432 network_create.go:124] attempt to create docker network ingress-addon-legacy-299000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1212 14:15:12.631247    4432 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-299000 ingress-addon-legacy-299000
	I1212 14:15:12.716458    4432 network_create.go:108] docker network ingress-addon-legacy-299000 192.168.49.0/24 created
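The failed "docker network inspect" followed by "docker network create" above is minikube's check-then-create pattern for its per-profile bridge network: the inspect failure simply means the network does not exist yet. A minimal shell sketch of the same pattern, using only flags that appear in the logged command (the network name below is a placeholder, not the profile name used by this test):

	# create the bridge network only if it does not already exist
	NET=example-net   # placeholder; minikube uses the profile name here
	docker network inspect "$NET" >/dev/null 2>&1 || \
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o com.docker.network.driver.mtu=65535 \
	    --label=created_by.minikube.sigs.k8s.io=true "$NET"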
	I1212 14:15:12.716502    4432 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-299000" container
	I1212 14:15:12.716627    4432 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 14:15:12.766843    4432 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-299000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-299000 --label created_by.minikube.sigs.k8s.io=true
	I1212 14:15:12.818791    4432 oci.go:103] Successfully created a docker volume ingress-addon-legacy-299000
	I1212 14:15:12.818908    4432 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-299000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-299000 --entrypoint /usr/bin/test -v ingress-addon-legacy-299000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 14:15:13.200178    4432 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-299000
	I1212 14:15:13.200234    4432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1212 14:15:13.200247    4432 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 14:15:13.200360    4432 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-299000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 14:15:15.610363    4432 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-299000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir: (2.409882314s)
	I1212 14:15:15.610394    4432 kic.go:203] duration metric: took 2.410136 seconds to extract preloaded images to volume
	I1212 14:15:15.610506    4432 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 14:15:15.709395    4432 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-299000 --name ingress-addon-legacy-299000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-299000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-299000 --network ingress-addon-legacy-299000 --ip 192.168.49.2 --volume ingress-addon-legacy-299000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 14:15:15.962987    4432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Running}}
	I1212 14:15:16.017431    4432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Status}}
	I1212 14:15:16.076853    4432 cli_runner.go:164] Run: docker exec ingress-addon-legacy-299000 stat /var/lib/dpkg/alternatives/iptables
	I1212 14:15:16.194775    4432 oci.go:144] the created container "ingress-addon-legacy-299000" has a running status.
	I1212 14:15:16.194813    4432 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa...
	I1212 14:15:16.637022    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 14:15:16.637140    4432 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 14:15:16.699036    4432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Status}}
	I1212 14:15:16.749848    4432 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 14:15:16.749869    4432 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-299000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 14:15:16.840634    4432 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Status}}
	I1212 14:15:16.891726    4432 machine.go:88] provisioning docker machine ...
	I1212 14:15:16.891772    4432 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-299000"
	I1212 14:15:16.891872    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:16.943501    4432 main.go:141] libmachine: Using SSH client type: native
	I1212 14:15:16.943850    4432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 50498 <nil> <nil>}
	I1212 14:15:16.943867    4432 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-299000 && echo "ingress-addon-legacy-299000" | sudo tee /etc/hostname
	I1212 14:15:17.075524    4432 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-299000
	
	I1212 14:15:17.075655    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:17.127180    4432 main.go:141] libmachine: Using SSH client type: native
	I1212 14:15:17.127483    4432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 50498 <nil> <nil>}
	I1212 14:15:17.127502    4432 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-299000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-299000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-299000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 14:15:17.252433    4432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 14:15:17.252457    4432 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17761-876/.minikube CaCertPath:/Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17761-876/.minikube}
	I1212 14:15:17.252476    4432 ubuntu.go:177] setting up certificates
	I1212 14:15:17.252492    4432 provision.go:83] configureAuth start
	I1212 14:15:17.252564    4432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-299000
	I1212 14:15:17.302837    4432 provision.go:138] copyHostCerts
	I1212 14:15:17.302878    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17761-876/.minikube/cert.pem
	I1212 14:15:17.302937    4432 exec_runner.go:144] found /Users/jenkins/minikube-integration/17761-876/.minikube/cert.pem, removing ...
	I1212 14:15:17.302944    4432 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17761-876/.minikube/cert.pem
	I1212 14:15:17.303075    4432 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17761-876/.minikube/cert.pem (1123 bytes)
	I1212 14:15:17.303274    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17761-876/.minikube/key.pem
	I1212 14:15:17.303311    4432 exec_runner.go:144] found /Users/jenkins/minikube-integration/17761-876/.minikube/key.pem, removing ...
	I1212 14:15:17.303317    4432 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17761-876/.minikube/key.pem
	I1212 14:15:17.303407    4432 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17761-876/.minikube/key.pem (1675 bytes)
	I1212 14:15:17.303545    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17761-876/.minikube/ca.pem
	I1212 14:15:17.303572    4432 exec_runner.go:144] found /Users/jenkins/minikube-integration/17761-876/.minikube/ca.pem, removing ...
	I1212 14:15:17.303577    4432 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17761-876/.minikube/ca.pem
	I1212 14:15:17.303645    4432 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17761-876/.minikube/ca.pem (1078 bytes)
	I1212 14:15:17.303795    4432 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17761-876/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-299000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-299000]
	I1212 14:15:17.391921    4432 provision.go:172] copyRemoteCerts
	I1212 14:15:17.391969    4432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 14:15:17.392041    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:17.442672    4432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:15:17.530127    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 14:15:17.530221    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1212 14:15:17.550090    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 14:15:17.550153    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 14:15:17.570180    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 14:15:17.570262    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 14:15:17.590492    4432 provision.go:86] duration metric: configureAuth took 337.982449ms
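configureAuth above generates a server certificate and copies ca.pem, server.pem and server-key.pem into /etc/docker inside the node; the docker.service written a few lines below then points dockerd's --tlscacert/--tlscert/--tlskey at those files and listens on tcp://0.0.0.0:2376. Purely as an illustration (not a step this test performs), a client holding the matching CA and client certificate could reach that daemon with the standard docker TLS flags; HOST:PORT is a placeholder for the published mapping of container port 2376:

	docker --tlsverify \
	  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
	  -H tcp://HOST:PORT version   # HOST:PORT is illustrative, not taken from this log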
	I1212 14:15:17.590507    4432 ubuntu.go:193] setting minikube options for container-runtime
	I1212 14:15:17.590661    4432 config.go:182] Loaded profile config "ingress-addon-legacy-299000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1212 14:15:17.590729    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:17.641873    4432 main.go:141] libmachine: Using SSH client type: native
	I1212 14:15:17.642176    4432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 50498 <nil> <nil>}
	I1212 14:15:17.642189    4432 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 14:15:17.764963    4432 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1212 14:15:17.764975    4432 ubuntu.go:71] root file system type: overlay
	I1212 14:15:17.765066    4432 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 14:15:17.765153    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:17.816101    4432 main.go:141] libmachine: Using SSH client type: native
	I1212 14:15:17.816433    4432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 50498 <nil> <nil>}
	I1212 14:15:17.816484    4432 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 14:15:17.947121    4432 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 14:15:17.947208    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:18.000055    4432 main.go:141] libmachine: Using SSH client type: native
	I1212 14:15:18.000351    4432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 127.0.0.1 50498 <nil> <nil>}
	I1212 14:15:18.000370    4432 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 14:15:18.567537    4432 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-12 22:15:17.944709717 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1212 14:15:18.567564    4432 machine.go:91] provisioned docker machine in 1.67580562s
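The unit-file update shown above is deliberately idempotent: the new docker.service is written to docker.service.new, diffed against the live unit, and the move/daemon-reload/enable/restart only runs when the diff is non-empty (here it was, hence the restart). The single SSH command from the log, reflowed for readability with nothing added or removed:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl -f daemon-reload && \
	       sudo systemctl -f enable docker && \
	       sudo systemctl -f restart docker; }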
	I1212 14:15:18.567575    4432 client.go:171] LocalClient.Create took 6.089982278s
	I1212 14:15:18.567591    4432 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-299000" took 6.09002941s
	I1212 14:15:18.567600    4432 start.go:300] post-start starting for "ingress-addon-legacy-299000" (driver="docker")
	I1212 14:15:18.567609    4432 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 14:15:18.567675    4432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 14:15:18.567734    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:18.619749    4432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:15:18.709843    4432 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 14:15:18.713694    4432 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 14:15:18.713732    4432 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 14:15:18.713740    4432 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 14:15:18.713746    4432 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 14:15:18.713756    4432 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17761-876/.minikube/addons for local assets ...
	I1212 14:15:18.713851    4432 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17761-876/.minikube/files for local assets ...
	I1212 14:15:18.714075    4432 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/ssl/certs/13362.pem -> 13362.pem in /etc/ssl/certs
	I1212 14:15:18.714083    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/ssl/certs/13362.pem -> /etc/ssl/certs/13362.pem
	I1212 14:15:18.714298    4432 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 14:15:18.722181    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/ssl/certs/13362.pem --> /etc/ssl/certs/13362.pem (1708 bytes)
	I1212 14:15:18.742177    4432 start.go:303] post-start completed in 174.567035ms
	I1212 14:15:18.742753    4432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-299000
	I1212 14:15:18.793079    4432 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/config.json ...
	I1212 14:15:18.793550    4432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 14:15:18.793611    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:18.843961    4432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:15:18.930945    4432 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 14:15:18.936065    4432 start.go:128] duration metric: createHost completed in 6.507633348s
	I1212 14:15:18.936080    4432 start.go:83] releasing machines lock for "ingress-addon-legacy-299000", held for 6.507728187s
	I1212 14:15:18.936155    4432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-299000
	I1212 14:15:18.986471    4432 ssh_runner.go:195] Run: cat /version.json
	I1212 14:15:18.986493    4432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 14:15:18.986546    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:18.986560    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:19.042011    4432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:15:19.042137    4432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:15:19.231475    4432 ssh_runner.go:195] Run: systemctl --version
	I1212 14:15:19.236328    4432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 14:15:19.241235    4432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1212 14:15:19.262612    4432 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1212 14:15:19.262693    4432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1212 14:15:19.277591    4432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1212 14:15:19.292173    4432 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 14:15:19.292207    4432 start.go:475] detecting cgroup driver to use...
	I1212 14:15:19.292219    4432 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 14:15:19.292337    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 14:15:19.306982    4432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1212 14:15:19.316270    4432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 14:15:19.325384    4432 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 14:15:19.325463    4432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 14:15:19.334814    4432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 14:15:19.343944    4432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 14:15:19.353013    4432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 14:15:19.362169    4432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 14:15:19.370905    4432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 14:15:19.380188    4432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 14:15:19.388255    4432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 14:15:19.395959    4432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 14:15:19.450575    4432 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 14:15:19.528855    4432 start.go:475] detecting cgroup driver to use...
	I1212 14:15:19.528876    4432 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 14:15:19.528943    4432 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 14:15:19.540770    4432 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1212 14:15:19.540840    4432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 14:15:19.551974    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 14:15:19.568626    4432 ssh_runner.go:195] Run: which cri-dockerd
	I1212 14:15:19.573460    4432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 14:15:19.582983    4432 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 14:15:19.601003    4432 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 14:15:19.688524    4432 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 14:15:19.746931    4432 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 14:15:19.747047    4432 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 14:15:19.784987    4432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 14:15:19.844616    4432 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 14:15:20.089528    4432 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 14:15:20.112566    4432 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 14:15:20.180536    4432 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1212 14:15:20.180669    4432 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-299000 dig +short host.docker.internal
	I1212 14:15:20.310366    4432 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1212 14:15:20.310466    4432 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1212 14:15:20.315016    4432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
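The /etc/hosts update above avoids duplicate entries by filtering out any existing host.minikube.internal line, appending a fresh one, and copying the temp file back over /etc/hosts in one step. A small reusable sketch of the same grep-v/append/cp pattern (the function name and arguments are illustrative, not something minikube defines):

	# add_hosts_entry IP NAME - rewrite /etc/hosts so NAME has exactly one entry
	add_hosts_entry() {
	  ip=$1; name=$2
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts
	}
	add_hosts_entry 192.168.65.254 host.minikube.internal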
	I1212 14:15:20.325759    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:15:20.376192    4432 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1212 14:15:20.376288    4432 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 14:15:20.395068    4432 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1212 14:15:20.395085    4432 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1212 14:15:20.395152    4432 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 14:15:20.403712    4432 ssh_runner.go:195] Run: which lz4
	I1212 14:15:20.407438    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 14:15:20.407541    4432 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 14:15:20.411630    4432 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 14:15:20.411656    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1212 14:15:26.172878    4432 docker.go:635] Took 5.765351 seconds to copy over tarball
	I1212 14:15:26.172951    4432 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 14:15:27.810294    4432 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.637289824s)
	I1212 14:15:27.810312    4432 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 14:15:27.853945    4432 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 14:15:27.862687    4432 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1212 14:15:27.878181    4432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 14:15:27.931105    4432 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 14:15:28.997684    4432 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.066552961s)
	I1212 14:15:28.997777    4432 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 14:15:29.018055    4432 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1212 14:15:29.018080    4432 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1212 14:15:29.018095    4432 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 14:15:29.024092    4432 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 14:15:29.024980    4432 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 14:15:29.026061    4432 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 14:15:29.026115    4432 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 14:15:29.026134    4432 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 14:15:29.026216    4432 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 14:15:29.026247    4432 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 14:15:29.026750    4432 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 14:15:29.030377    4432 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 14:15:29.031978    4432 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 14:15:29.032087    4432 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 14:15:29.033056    4432 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 14:15:29.033069    4432 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 14:15:29.033252    4432 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 14:15:29.033264    4432 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 14:15:29.033527    4432 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 14:15:29.667473    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 14:15:29.677300    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1212 14:15:29.678407    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 14:15:29.687040    4432 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 14:15:29.687088    4432 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1212 14:15:29.687157    4432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1212 14:15:29.693643    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1212 14:15:29.695449    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1212 14:15:29.700893    4432 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1212 14:15:29.700933    4432 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 14:15:29.700936    4432 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1212 14:15:29.700960    4432 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 14:15:29.701014    4432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 14:15:29.701024    4432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 14:15:29.704786    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1212 14:15:29.714433    4432 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 14:15:29.764687    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1212 14:15:29.773666    4432 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1212 14:15:29.773701    4432 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 14:15:29.773780    4432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1212 14:15:29.782601    4432 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1212 14:15:29.782644    4432 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 14:15:29.782743    4432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1212 14:15:29.787687    4432 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 14:15:29.789120    4432 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 14:15:29.789215    4432 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1212 14:15:29.789237    4432 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 14:15:29.789356    4432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 14:15:29.874599    4432 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1212 14:15:29.874638    4432 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 14:15:29.874731    4432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 14:15:29.875335    4432 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1212 14:15:29.878559    4432 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 14:15:29.885124    4432 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 14:15:29.893742    4432 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 14:15:30.397374    4432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 14:15:30.417458    4432 cache_images.go:92] LoadImages completed in 1.399338063s
	W1212 14:15:30.417516    4432 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17761-876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1212 14:15:30.417603    4432 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 14:15:30.464256    4432 cni.go:84] Creating CNI manager for ""
	I1212 14:15:30.464275    4432 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1212 14:15:30.464292    4432 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 14:15:30.464312    4432 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-299000 NodeName:ingress-addon-legacy-299000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 14:15:30.464401    4432 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-299000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 14:15:30.464479    4432 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-299000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-299000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
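The generated kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; the bootstrap step that consumes it is not part of this excerpt. For orientation only, and as an assumption about a later step rather than something shown in this log, a config file of this shape would normally be handed to kubeadm like so:

	# illustrative; the actual invocation is not shown in this excerpt
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml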
	I1212 14:15:30.464560    4432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 14:15:30.473802    4432 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 14:15:30.473864    4432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 14:15:30.483379    4432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1212 14:15:30.498639    4432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 14:15:30.513906    4432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1212 14:15:30.529318    4432 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 14:15:30.533465    4432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 14:15:30.543804    4432 certs.go:56] Setting up /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000 for IP: 192.168.49.2
	I1212 14:15:30.543826    4432 certs.go:190] acquiring lock for shared ca certs: {Name:mk579adac7bb4a1b7043de11f848f9ebb7aec125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:30.543998    4432 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17761-876/.minikube/ca.key
	I1212 14:15:30.544067    4432 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17761-876/.minikube/proxy-client-ca.key
	I1212 14:15:30.544111    4432 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/client.key
	I1212 14:15:30.544135    4432 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/client.crt with IP's: []
	I1212 14:15:30.630762    4432 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/client.crt ...
	I1212 14:15:30.630779    4432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/client.crt: {Name:mk83831507e08781054d62a6f71e6585e29910f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:30.631069    4432 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/client.key ...
	I1212 14:15:30.631078    4432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/client.key: {Name:mk98b1cb5f30556861bcbabe2a77298c6b74e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:30.631354    4432 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.key.dd3b5fb2
	I1212 14:15:30.631371    4432 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 14:15:30.712095    4432 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.crt.dd3b5fb2 ...
	I1212 14:15:30.712105    4432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.crt.dd3b5fb2: {Name:mkd3dc5107e9f6081130df35acfd9f5cee4516e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:30.712355    4432 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.key.dd3b5fb2 ...
	I1212 14:15:30.712364    4432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.key.dd3b5fb2: {Name:mkbda52c4e389c247b9533ea06a17da551678a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:30.712569    4432 certs.go:337] copying /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.crt
	I1212 14:15:30.712750    4432 certs.go:341] copying /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.key
	I1212 14:15:30.712925    4432 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.key
	I1212 14:15:30.712940    4432 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.crt with IP's: []
	I1212 14:15:30.854408    4432 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.crt ...
	I1212 14:15:30.854418    4432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.crt: {Name:mk2df24e39435864d03fc4dcbfbbd7f30286c06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:30.854677    4432 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.key ...
	I1212 14:15:30.854690    4432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.key: {Name:mkf5d225d013eaea6ebccf8a61dd104b450b2c33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:15:30.854883    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 14:15:30.854910    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 14:15:30.854927    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 14:15:30.854943    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 14:15:30.854963    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 14:15:30.854983    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 14:15:30.855002    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 14:15:30.855019    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 14:15:30.855108    4432 certs.go:437] found cert: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/Users/jenkins/minikube-integration/17761-876/.minikube/certs/1336.pem (1338 bytes)
	W1212 14:15:30.855160    4432 certs.go:433] ignoring /Users/jenkins/minikube-integration/17761-876/.minikube/certs/Users/jenkins/minikube-integration/17761-876/.minikube/certs/1336_empty.pem, impossibly tiny 0 bytes
	I1212 14:15:30.855170    4432 certs.go:437] found cert: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 14:15:30.855201    4432 certs.go:437] found cert: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem (1078 bytes)
	I1212 14:15:30.855228    4432 certs.go:437] found cert: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem (1123 bytes)
	I1212 14:15:30.855260    4432 certs.go:437] found cert: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/Users/jenkins/minikube-integration/17761-876/.minikube/certs/key.pem (1675 bytes)
	I1212 14:15:30.855324    4432 certs.go:437] found cert: /Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/ssl/certs/13362.pem (1708 bytes)
	I1212 14:15:30.855362    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 14:15:30.855384    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/certs/1336.pem -> /usr/share/ca-certificates/1336.pem
	I1212 14:15:30.855404    4432 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/ssl/certs/13362.pem -> /usr/share/ca-certificates/13362.pem
	I1212 14:15:30.855818    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 14:15:30.876276    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 14:15:30.896708    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 14:15:30.916759    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/ingress-addon-legacy-299000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 14:15:30.936795    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 14:15:30.957150    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 14:15:30.977481    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 14:15:30.998311    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 14:15:31.018333    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 14:15:31.038764    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/certs/1336.pem --> /usr/share/ca-certificates/1336.pem (1338 bytes)
	I1212 14:15:31.059471    4432 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/ssl/certs/13362.pem --> /usr/share/ca-certificates/13362.pem (1708 bytes)
	I1212 14:15:31.080050    4432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 14:15:31.095364    4432 ssh_runner.go:195] Run: openssl version
	I1212 14:15:31.101105    4432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 14:15:31.110112    4432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 14:15:31.114088    4432 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:05 /usr/share/ca-certificates/minikubeCA.pem
	I1212 14:15:31.114135    4432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 14:15:31.120590    4432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 14:15:31.129411    4432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1336.pem && ln -fs /usr/share/ca-certificates/1336.pem /etc/ssl/certs/1336.pem"
	I1212 14:15:31.138275    4432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1336.pem
	I1212 14:15:31.142187    4432 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:10 /usr/share/ca-certificates/1336.pem
	I1212 14:15:31.142227    4432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1336.pem
	I1212 14:15:31.148753    4432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1336.pem /etc/ssl/certs/51391683.0"
	I1212 14:15:31.157606    4432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13362.pem && ln -fs /usr/share/ca-certificates/13362.pem /etc/ssl/certs/13362.pem"
	I1212 14:15:31.166668    4432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13362.pem
	I1212 14:15:31.171223    4432 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:10 /usr/share/ca-certificates/13362.pem
	I1212 14:15:31.171269    4432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13362.pem
	I1212 14:15:31.177691    4432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13362.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 14:15:31.186355    4432 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 14:15:31.190493    4432 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 14:15:31.190539    4432 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-299000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-299000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:15:31.190665    4432 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 14:15:31.208307    4432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 14:15:31.216980    4432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 14:15:31.225190    4432 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 14:15:31.225244    4432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 14:15:31.233408    4432 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 14:15:31.233456    4432 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 14:15:31.296106    4432 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 14:15:31.296190    4432 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 14:15:31.547267    4432 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 14:15:31.547360    4432 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 14:15:31.547483    4432 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 14:15:31.715080    4432 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 14:15:31.715957    4432 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 14:15:31.716003    4432 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 14:15:31.784298    4432 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 14:15:31.804912    4432 out.go:204]   - Generating certificates and keys ...
	I1212 14:15:31.805005    4432 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 14:15:31.805076    4432 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 14:15:32.003503    4432 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 14:15:32.112643    4432 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 14:15:32.278670    4432 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 14:15:32.711469    4432 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 14:15:32.780832    4432 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 14:15:32.780941    4432 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-299000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 14:15:32.822800    4432 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 14:15:32.822922    4432 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-299000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 14:15:33.015387    4432 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 14:15:33.161668    4432 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 14:15:33.286432    4432 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 14:15:33.286546    4432 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 14:15:33.392388    4432 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 14:15:33.505205    4432 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 14:15:33.600943    4432 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 14:15:33.687126    4432 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 14:15:33.688226    4432 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 14:15:33.710079    4432 out.go:204]   - Booting up control plane ...
	I1212 14:15:33.710291    4432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 14:15:33.710443    4432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 14:15:33.710579    4432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 14:15:33.710732    4432 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 14:15:33.710988    4432 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 14:16:13.698513    4432 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1212 14:16:13.699256    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:16:13.699543    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:16:18.700937    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:16:18.701095    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:16:28.703112    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:16:28.703344    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:16:48.704789    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:16:48.705000    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:17:28.706600    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:17:28.706887    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:17:28.706913    4432 kubeadm.go:322] 
	I1212 14:17:28.706980    4432 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1212 14:17:28.707057    4432 kubeadm.go:322] 		timed out waiting for the condition
	I1212 14:17:28.707069    4432 kubeadm.go:322] 
	I1212 14:17:28.707125    4432 kubeadm.go:322] 	This error is likely caused by:
	I1212 14:17:28.707176    4432 kubeadm.go:322] 		- The kubelet is not running
	I1212 14:17:28.707351    4432 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 14:17:28.707362    4432 kubeadm.go:322] 
	I1212 14:17:28.707546    4432 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 14:17:28.707606    4432 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1212 14:17:28.707645    4432 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1212 14:17:28.707651    4432 kubeadm.go:322] 
	I1212 14:17:28.707806    4432 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 14:17:28.707909    4432 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 14:17:28.707924    4432 kubeadm.go:322] 
	I1212 14:17:28.708058    4432 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1212 14:17:28.708121    4432 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1212 14:17:28.708212    4432 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1212 14:17:28.708240    4432 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1212 14:17:28.708247    4432 kubeadm.go:322] 
	I1212 14:17:28.709388    4432 kubeadm.go:322] W1212 22:15:31.294939    1697 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 14:17:28.709535    4432 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1212 14:17:28.709611    4432 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1212 14:17:28.709726    4432 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1212 14:17:28.709807    4432 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 14:17:28.709912    4432 kubeadm.go:322] W1212 22:15:33.693019    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 14:17:28.710047    4432 kubeadm.go:322] W1212 22:15:33.694069    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 14:17:28.710111    4432 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 14:17:28.710169    4432 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
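	
	The troubleshooting hints kubeadm prints above have to be executed inside the node, not on the macOS host. A minimal sketch, assuming the docker driver's node container is named after the profile, as minikube normally names it (ingress-addon-legacy-299000 here):
	
	# kubelet status and recent journal from inside the node container
	docker exec ingress-addon-legacy-299000 systemctl status kubelet
	docker exec ingress-addon-legacy-299000 journalctl -xeu kubelet --no-pager
	# Kubernetes containers started (or not) by the runtime inside the node
	docker exec ingress-addon-legacy-299000 sh -c "docker ps -a | grep kube | grep -v pause"
	# then, with an ID from the listing above substituted for CONTAINERID:
	# docker exec ingress-addon-legacy-299000 docker logs CONTAINERID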
	W1212 14:17:28.710263    4432 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-299000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-299000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1212 22:15:31.294939    1697 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1212 22:15:33.693019    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1212 22:15:33.694069    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-299000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-299000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1212 22:15:31.294939    1697 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1212 22:15:33.693019    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1212 22:15:33.694069    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 14:17:28.710297    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1212 14:17:29.117219    4432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 14:17:29.127515    4432 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 14:17:29.127575    4432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 14:17:29.135807    4432 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 14:17:29.135834    4432 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 14:17:29.187804    4432 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 14:17:29.187862    4432 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 14:17:29.412870    4432 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 14:17:29.413013    4432 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 14:17:29.413102    4432 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 14:17:29.582197    4432 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 14:17:29.582987    4432 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 14:17:29.583038    4432 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 14:17:29.663782    4432 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 14:17:29.685240    4432 out.go:204]   - Generating certificates and keys ...
	I1212 14:17:29.685343    4432 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 14:17:29.685468    4432 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 14:17:29.685587    4432 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 14:17:29.685670    4432 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 14:17:29.685769    4432 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 14:17:29.685843    4432 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 14:17:29.685969    4432 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 14:17:29.686043    4432 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 14:17:29.686120    4432 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 14:17:29.686182    4432 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 14:17:29.686213    4432 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 14:17:29.686271    4432 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 14:17:29.925410    4432 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 14:17:30.080160    4432 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 14:17:30.200820    4432 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 14:17:30.287207    4432 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 14:17:30.287730    4432 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 14:17:30.310765    4432 out.go:204]   - Booting up control plane ...
	I1212 14:17:30.310919    4432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 14:17:30.311048    4432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 14:17:30.311205    4432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 14:17:30.311347    4432 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 14:17:30.311568    4432 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 14:18:10.297400    4432 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1212 14:18:10.298184    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:18:10.298442    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:18:15.299252    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:18:15.299410    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:18:25.300272    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:18:25.300435    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:18:45.302530    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:18:45.302738    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:19:25.304452    4432 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 14:19:25.304790    4432 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 14:19:25.304810    4432 kubeadm.go:322] 
	I1212 14:19:25.304865    4432 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1212 14:19:25.304969    4432 kubeadm.go:322] 		timed out waiting for the condition
	I1212 14:19:25.304981    4432 kubeadm.go:322] 
	I1212 14:19:25.305020    4432 kubeadm.go:322] 	This error is likely caused by:
	I1212 14:19:25.305067    4432 kubeadm.go:322] 		- The kubelet is not running
	I1212 14:19:25.305186    4432 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 14:19:25.305207    4432 kubeadm.go:322] 
	I1212 14:19:25.305331    4432 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 14:19:25.305367    4432 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1212 14:19:25.305411    4432 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1212 14:19:25.305418    4432 kubeadm.go:322] 
	I1212 14:19:25.305556    4432 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 14:19:25.305708    4432 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 14:19:25.305722    4432 kubeadm.go:322] 
	I1212 14:19:25.305833    4432 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1212 14:19:25.305893    4432 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1212 14:19:25.305979    4432 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1212 14:19:25.306013    4432 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1212 14:19:25.306022    4432 kubeadm.go:322] 
	I1212 14:19:25.307089    4432 kubeadm.go:322] W1212 22:17:29.186942    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 14:19:25.307262    4432 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1212 14:19:25.307328    4432 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1212 14:19:25.307433    4432 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1212 14:19:25.307560    4432 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 14:19:25.307667    4432 kubeadm.go:322] W1212 22:17:30.292031    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 14:19:25.307774    4432 kubeadm.go:322] W1212 22:17:30.292735    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 14:19:25.307836    4432 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 14:19:25.307896    4432 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1212 14:19:25.307917    4432 kubeadm.go:406] StartCluster complete in 3m54.11633076s
	I1212 14:19:25.308018    4432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1212 14:19:25.325589    4432 logs.go:284] 0 containers: []
	W1212 14:19:25.325603    4432 logs.go:286] No container was found matching "kube-apiserver"
	I1212 14:19:25.325669    4432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1212 14:19:25.342175    4432 logs.go:284] 0 containers: []
	W1212 14:19:25.342190    4432 logs.go:286] No container was found matching "etcd"
	I1212 14:19:25.342268    4432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1212 14:19:25.359595    4432 logs.go:284] 0 containers: []
	W1212 14:19:25.359609    4432 logs.go:286] No container was found matching "coredns"
	I1212 14:19:25.359689    4432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1212 14:19:25.377880    4432 logs.go:284] 0 containers: []
	W1212 14:19:25.377893    4432 logs.go:286] No container was found matching "kube-scheduler"
	I1212 14:19:25.377961    4432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1212 14:19:25.396553    4432 logs.go:284] 0 containers: []
	W1212 14:19:25.396568    4432 logs.go:286] No container was found matching "kube-proxy"
	I1212 14:19:25.396640    4432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1212 14:19:25.415188    4432 logs.go:284] 0 containers: []
	W1212 14:19:25.415202    4432 logs.go:286] No container was found matching "kube-controller-manager"
	I1212 14:19:25.415265    4432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1212 14:19:25.433624    4432 logs.go:284] 0 containers: []
	W1212 14:19:25.433639    4432 logs.go:286] No container was found matching "kindnet"
	I1212 14:19:25.433648    4432 logs.go:123] Gathering logs for dmesg ...
	I1212 14:19:25.433659    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 14:19:25.446110    4432 logs.go:123] Gathering logs for describe nodes ...
	I1212 14:19:25.446128    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 14:19:25.501669    4432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 14:19:25.501681    4432 logs.go:123] Gathering logs for Docker ...
	I1212 14:19:25.501693    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1212 14:19:25.516608    4432 logs.go:123] Gathering logs for container status ...
	I1212 14:19:25.516622    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 14:19:25.566778    4432 logs.go:123] Gathering logs for kubelet ...
	I1212 14:19:25.566794    4432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
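	
	The entries above show minikube itself falling back to journalctl and crictl/docker to collect kubelet, Docker and container-status logs before reporting the failure. For a manual post-mortem the same material can usually be pulled in one go; a sketch, assuming the binary and profile from this run (the output filename is only illustrative):
	
	# dump component and kubelet logs for the failed profile to a file
	out/minikube-darwin-amd64 logs -p ingress-addon-legacy-299000 --file=kubelet-failure.log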
	W1212 14:19:25.603134    4432 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1212 22:17:29.186942    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1212 22:17:30.292031    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1212 22:17:30.292735    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 14:19:25.603157    4432 out.go:239] * 
	* 
	W1212 14:19:25.603198    4432 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1212 22:17:29.186942    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1212 22:17:30.292031    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1212 22:17:30.292735    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 14:19:25.603215    4432 out.go:239] * 
	* 
	W1212 14:19:25.604006    4432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 14:19:25.689403    4432 out.go:177] 
	W1212 14:19:25.731488    4432 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1212 22:17:29.186942    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1212 22:17:30.292031    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1212 22:17:30.292735    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 14:19:25.731555    4432 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 14:19:25.731580    4432 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 14:19:25.753503    4432 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-299000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (260.69s)
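A minimal follow-up sketch for the kubelet failure above, using only the commands already suggested in the kubeadm output and the minikube hint; the profile name and start arguments are copied from the failing invocation, so treat this as an illustration rather than a verified fix:

	# Inspect the kubelet inside the minikube node, as the kubeadm output suggests
	out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 ssh -- sudo systemctl status kubelet
	out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 ssh -- sudo journalctl -xeu kubelet
	# List Kubernetes containers started by Docker and inspect a failing one
	out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 ssh -- "docker ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override from the suggestion above
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-299000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --extra-config=kubelet.cgroup-driver=systemd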

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (105.15s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 addons enable ingress --alsologtostderr -v=5
E1212 14:20:26.414794    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m44.711581282s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:19:25.916978    4675 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:19:25.917411    4675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:19:25.917417    4675 out.go:309] Setting ErrFile to fd 2...
	I1212 14:19:25.917421    4675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:19:25.917612    4675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:19:25.917984    4675 mustload.go:65] Loading cluster: ingress-addon-legacy-299000
	I1212 14:19:25.918284    4675 config.go:182] Loaded profile config "ingress-addon-legacy-299000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1212 14:19:25.918300    4675 addons.go:594] checking whether the cluster is paused
	I1212 14:19:25.918384    4675 config.go:182] Loaded profile config "ingress-addon-legacy-299000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1212 14:19:25.918400    4675 host.go:66] Checking if "ingress-addon-legacy-299000" exists ...
	I1212 14:19:25.918835    4675 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Status}}
	I1212 14:19:25.970684    4675 ssh_runner.go:195] Run: systemctl --version
	I1212 14:19:25.970780    4675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:19:26.021224    4675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:19:26.106607    4675 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 14:19:26.145843    4675 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1212 14:19:26.166753    4675 config.go:182] Loaded profile config "ingress-addon-legacy-299000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1212 14:19:26.166776    4675 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-299000"
	I1212 14:19:26.166787    4675 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-299000"
	I1212 14:19:26.166825    4675 host.go:66] Checking if "ingress-addon-legacy-299000" exists ...
	I1212 14:19:26.167222    4675 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Status}}
	I1212 14:19:26.238631    4675 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1212 14:19:26.259752    4675 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1212 14:19:26.280682    4675 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1212 14:19:26.301741    4675 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1212 14:19:26.323814    4675 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 14:19:26.323833    4675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1212 14:19:26.323981    4675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:19:26.376066    4675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:19:26.473330    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:26.522963    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:26.522989    4675 retry.go:31] will retry after 365.360555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:26.889679    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:26.937214    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:26.937231    4675 retry.go:31] will retry after 240.118027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:27.178696    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:27.241422    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:27.241439    4675 retry.go:31] will retry after 550.448379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:27.794085    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:27.845079    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:27.845104    4675 retry.go:31] will retry after 526.562656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:28.372381    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:28.428894    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:28.428912    4675 retry.go:31] will retry after 1.433821944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:29.865072    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:29.916879    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:29.916902    4675 retry.go:31] will retry after 1.679585726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:31.597946    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:31.652227    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:31.652247    4675 retry.go:31] will retry after 3.48382354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:35.136598    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:35.187453    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:35.187471    4675 retry.go:31] will retry after 3.789245954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:38.977874    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:39.026140    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:39.026159    4675 retry.go:31] will retry after 3.570687155s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:42.597077    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:42.656633    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:42.656650    4675 retry.go:31] will retry after 7.555699971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:50.213377    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:19:50.270912    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:19:50.270930    4675 retry.go:31] will retry after 21.438029976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:20:11.709893    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:20:11.774516    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:20:11.774534    4675 retry.go:31] will retry after 23.844078029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:20:35.619014    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:20:35.678503    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:20:35.678519    4675 retry.go:31] will retry after 34.727380799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:10.406261    4675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1212 14:21:10.453198    4675 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:10.453230    4675 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-299000"
	I1212 14:21:10.474760    4675 out.go:177] * Verifying ingress addon...
	I1212 14:21:10.496721    4675 out.go:177] 
	W1212 14:21:10.518709    4675 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-299000" does not exist: client config: context "ingress-addon-legacy-299000" does not exist]
	W1212 14:21:10.518738    4675 out.go:239] * 
	* 
	W1212 14:21:10.522165    4675 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 14:21:10.543509    4675 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
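The repeated "connection to the server localhost:8443 was refused" messages and the missing "ingress-addon-legacy-299000" kubeconfig context both follow from the cluster start failure above; a hedged sketch of how that state could be confirmed by hand (profile name taken from the test, not a verified procedure):

	# Cluster and kubeconfig state for the profile
	out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 status
	kubectl config get-contexts ingress-addon-legacy-299000
	# Probe the apiserver port the addon apply was targeting, from inside the node
	out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 ssh -- curl -k https://localhost:8443/healthz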
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-299000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-299000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139",
	        "Created": "2023-12-12T22:15:15.759257851Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51092,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:15:15.955144295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/hostname",
	        "HostsPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/hosts",
	        "LogPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139-json.log",
	        "Name": "/ingress-addon-legacy-299000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-299000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-299000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d-init/diff:/var/lib/docker/overlay2/e1667525af59d04335391b627b4b38c36536ed72e1a9af9b27a7accb0d45e601/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-299000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-299000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-299000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-299000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-299000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6b4509e38eddca294f25fa69289397eb7ef653f3ec3a8989a068419d96cfb40",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50496"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50497"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6b4509e38ed",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-299000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "27818a959fd1",
	                        "ingress-addon-legacy-299000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "479ec0412aba9e4f43dba85168b209185217edaeffd45cb55d79a1391048c920",
	                    "EndpointID": "5cb58e8c35006cdf4372577cbc34bf41bfc5ba47038a1e5d4007d5fcee6a280a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-299000 -n ingress-addon-legacy-299000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-299000 -n ingress-addon-legacy-299000: exit status 6 (378.777788ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:21:10.992016    4724 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-299000" does not appear in /Users/jenkins/minikube-integration/17761-876/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-299000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (105.15s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (110.58s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 addons enable ingress-dns --alsologtostderr -v=5
E1212 14:22:15.832789    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:22:42.567719    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-299000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m50.153422321s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:21:11.056803    4734 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:21:11.057230    4734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:21:11.057236    4734 out.go:309] Setting ErrFile to fd 2...
	I1212 14:21:11.057240    4734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:21:11.057420    4734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:21:11.057787    4734 mustload.go:65] Loading cluster: ingress-addon-legacy-299000
	I1212 14:21:11.058084    4734 config.go:182] Loaded profile config "ingress-addon-legacy-299000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1212 14:21:11.058101    4734 addons.go:594] checking whether the cluster is paused
	I1212 14:21:11.058185    4734 config.go:182] Loaded profile config "ingress-addon-legacy-299000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1212 14:21:11.058201    4734 host.go:66] Checking if "ingress-addon-legacy-299000" exists ...
	I1212 14:21:11.058584    4734 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Status}}
	I1212 14:21:11.108695    4734 ssh_runner.go:195] Run: systemctl --version
	I1212 14:21:11.108795    4734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:21:11.159747    4734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:21:11.246573    4734 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 14:21:11.284680    4734 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1212 14:21:11.305700    4734 config.go:182] Loaded profile config "ingress-addon-legacy-299000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1212 14:21:11.305721    4734 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-299000"
	I1212 14:21:11.305731    4734 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-299000"
	I1212 14:21:11.305768    4734 host.go:66] Checking if "ingress-addon-legacy-299000" exists ...
	I1212 14:21:11.306186    4734 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-299000 --format={{.State.Status}}
	I1212 14:21:11.377312    4734 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1212 14:21:11.399647    4734 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1212 14:21:11.420845    4734 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 14:21:11.420876    4734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1212 14:21:11.421013    4734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-299000
	I1212 14:21:11.473239    4734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50498 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/ingress-addon-legacy-299000/id_rsa Username:docker}
	I1212 14:21:11.569635    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:11.679055    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:11.679085    4734 retry.go:31] will retry after 347.793386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:12.028667    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:12.075905    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:12.075932    4734 retry.go:31] will retry after 464.465522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:12.542148    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:12.602216    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:12.602233    4734 retry.go:31] will retry after 670.414299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:13.274910    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:13.337322    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:13.337360    4734 retry.go:31] will retry after 1.164830692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:14.502651    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:14.561151    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:14.561168    4734 retry.go:31] will retry after 1.084348857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:15.645675    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:15.695611    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:15.695631    4734 retry.go:31] will retry after 2.673095337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:18.369245    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:18.417637    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:18.417656    4734 retry.go:31] will retry after 2.71948409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:21.139333    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:21.197813    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:21.197840    4734 retry.go:31] will retry after 5.348547719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:26.546863    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:26.599241    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:26.599259    4734 retry.go:31] will retry after 8.060685222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:34.660162    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:34.715192    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:34.715210    4734 retry.go:31] will retry after 9.425871425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:44.141267    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:44.192236    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:44.192257    4734 retry.go:31] will retry after 14.888030102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:59.081705    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:21:59.183636    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:21:59.183654    4734 retry.go:31] will retry after 21.337437383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:22:20.521489    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:22:20.574355    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:22:20.574371    4734 retry.go:31] will retry after 40.433612992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:23:01.010524    4734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1212 14:23:01.064793    4734 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 14:23:01.085835    4734 out.go:177] 
	W1212 14:23:01.108825    4734 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1212 14:23:01.108891    4734 out.go:239] * 
	* 
	W1212 14:23:01.112232    4734 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 14:23:01.133747    4734 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-299000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-299000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139",
	        "Created": "2023-12-12T22:15:15.759257851Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51092,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:15:15.955144295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/hostname",
	        "HostsPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/hosts",
	        "LogPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139-json.log",
	        "Name": "/ingress-addon-legacy-299000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-299000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-299000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d-init/diff:/var/lib/docker/overlay2/e1667525af59d04335391b627b4b38c36536ed72e1a9af9b27a7accb0d45e601/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-299000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-299000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-299000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-299000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-299000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6b4509e38eddca294f25fa69289397eb7ef653f3ec3a8989a068419d96cfb40",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50496"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50497"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6b4509e38ed",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-299000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "27818a959fd1",
	                        "ingress-addon-legacy-299000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "479ec0412aba9e4f43dba85168b209185217edaeffd45cb55d79a1391048c920",
	                    "EndpointID": "5cb58e8c35006cdf4372577cbc34bf41bfc5ba47038a1e5d4007d5fcee6a280a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-299000 -n ingress-addon-legacy-299000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-299000 -n ingress-addon-legacy-299000: exit status 6 (372.012633ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:23:01.573746    4795 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-299000" does not appear in /Users/jenkins/minikube-integration/17761-876/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-299000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (110.58s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:200: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-299000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-299000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139",
	        "Created": "2023-12-12T22:15:15.759257851Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51092,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:15:15.955144295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/hostname",
	        "HostsPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/hosts",
	        "LogPath": "/var/lib/docker/containers/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139/27818a959fd1e1b514bb03ca349df94d2898815dfc02c75a8fc353d9bb0fa139-json.log",
	        "Name": "/ingress-addon-legacy-299000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-299000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-299000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d-init/diff:/var/lib/docker/overlay2/e1667525af59d04335391b627b4b38c36536ed72e1a9af9b27a7accb0d45e601/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e461721dfa6f7ee2420ef74e88d50667f74fa0217a26b9128bda37cc693c5a8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-299000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-299000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-299000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-299000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-299000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6b4509e38eddca294f25fa69289397eb7ef653f3ec3a8989a068419d96cfb40",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50496"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50497"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6b4509e38ed",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-299000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "27818a959fd1",
	                        "ingress-addon-legacy-299000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "479ec0412aba9e4f43dba85168b209185217edaeffd45cb55d79a1391048c920",
	                    "EndpointID": "5cb58e8c35006cdf4372577cbc34bf41bfc5ba47038a1e5d4007d5fcee6a280a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-299000 -n ingress-addon-legacy-299000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-299000 -n ingress-addon-legacy-299000: exit status 6 (371.752958ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:23:01.997518    4807 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-299000" does not appear in /Users/jenkins/minikube-integration/17761-876/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-299000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (893.11s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-242000 ssh -- ls /minikube-host
E1212 14:27:15.841441    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:27:42.582664    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:28:38.898513    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:32:15.848474    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:32:42.584396    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:34:05.633629    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:37:15.849449    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:37:42.586783    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-242000 ssh -- ls /minikube-host: signal: killed (14m52.681974901s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-242000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-242000
helpers_test.go:235: (dbg) docker inspect mount-start-1-242000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "52e8e0e2305acd388dc7b0da5b2c807851374ae392a8deac9f412380ba794fa4",
	        "Created": "2023-12-12T22:26:40.919613007Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 96717,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:26:41.145617554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/52e8e0e2305acd388dc7b0da5b2c807851374ae392a8deac9f412380ba794fa4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52e8e0e2305acd388dc7b0da5b2c807851374ae392a8deac9f412380ba794fa4/hostname",
	        "HostsPath": "/var/lib/docker/containers/52e8e0e2305acd388dc7b0da5b2c807851374ae392a8deac9f412380ba794fa4/hosts",
	        "LogPath": "/var/lib/docker/containers/52e8e0e2305acd388dc7b0da5b2c807851374ae392a8deac9f412380ba794fa4/52e8e0e2305acd388dc7b0da5b2c807851374ae392a8deac9f412380ba794fa4-json.log",
	        "Name": "/mount-start-1-242000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "mount-start-1-242000:/var",
	                "/host_mnt/Users:/minikube-host",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-1-242000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1076a3d5b7ba8210ffe9d3da8faffa619cab221c401030c891fffc450bc53077-init/diff:/var/lib/docker/overlay2/e1667525af59d04335391b627b4b38c36536ed72e1a9af9b27a7accb0d45e601/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1076a3d5b7ba8210ffe9d3da8faffa619cab221c401030c891fffc450bc53077/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1076a3d5b7ba8210ffe9d3da8faffa619cab221c401030c891fffc450bc53077/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1076a3d5b7ba8210ffe9d3da8faffa619cab221c401030c891fffc450bc53077/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-1-242000",
	                "Source": "/var/lib/docker/volumes/mount-start-1-242000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-1-242000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-1-242000",
	                "name.minikube.sigs.k8s.io": "mount-start-1-242000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad22c1960d98a8655bed9b0b66db9f06a03528e84effb4c7b7a8ed82c5effddf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50761"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50762"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50763"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50764"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50765"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ad22c1960d98",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-1-242000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "52e8e0e2305a",
	                        "mount-start-1-242000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "NetworkID": "ce78f94b4259c4af0d63ec5fff60232cfbcd1de198cc8ebe4464bcadaf4592da",
	                    "EndpointID": "aa1c324090a53585dc33eb9e5572b5f19c12ccb97928d54193c7f2328a9de628",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
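
The inspect output above shows the pieces the mount test depends on are in place: HostConfig.Binds contains "/host_mnt/Users:/minikube-host", the Mounts entry for /minikube-host is RW, and SSH is published on 127.0.0.1:50761, which points at the hang happening inside the guest rather than in container setup. A minimal sketch of pulling those same fields back out with the docker CLI's --format templates follows; the 22/tcp template is the one the harness itself logs further down, and the sketch assumes the docker CLI is on PATH.

// Sketch only: re-reads the published SSH port and the host binds from the
// container above, using docker inspect --format templates.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "mount-start-1-242000" // container name from the inspect output above

	sshPort, err := inspect(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		fmt.Println("inspect ssh port:", err)
		return
	}
	binds, _ := inspect(name, `{{json .HostConfig.Binds}}`)

	fmt.Println("ssh published on 127.0.0.1:" + sshPort) // 50761 in the output above
	fmt.Println("binds:", binds)                         // includes "/host_mnt/Users:/minikube-host"
}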
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-242000 -n mount-start-1-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-242000 -n mount-start-1-242000: exit status 6 (372.050586ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:41:39.567381    6611 status.go:415] kubeconfig endpoint: extract IP: "mount-start-1-242000" does not appear in /Users/jenkins/minikube-integration/17761-876/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-242000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (893.11s)
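
TestMountStart/serial/VerifyMountFirst does not fail with an error of its own: `ls /minikube-host` over SSH never returns and the run is reaped with signal: killed after 14m52s. Below is a minimal sketch of bounding that probe with a context deadline so a wedged host mount surfaces as a timeout instead of a killed process; this is an assumption about how one might probe it, not the harness's current behavior, and the binary path and profile name are the ones from the failing run above.

// Sketch only: runs the mount probe under a deadline so a hang shows up as
// context.DeadlineExceeded rather than "signal: killed".
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
		"-p", "mount-start-1-242000", "ssh", "--", "ls", "/minikube-host")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("mount probe failed: %v (ctx err: %v)\n%s", err, ctx.Err(), out)
		return
	}
	fmt.Printf("mount contents:\n%s", out)
}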

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (757.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-499000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1212 14:45:18.904601    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:47:15.892932    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:47:42.631206    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:50:45.682365    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 14:52:15.895741    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:52:42.633163    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-499000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m37.150822774s)

                                                
                                                
-- stdout --
	* [multinode-499000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-499000 in cluster multinode-499000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-499000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:42:48.657984    6713 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:42:48.658239    6713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:42:48.658246    6713 out.go:309] Setting ErrFile to fd 2...
	I1212 14:42:48.658250    6713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:42:48.658425    6713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:42:48.660360    6713 out.go:303] Setting JSON to false
	I1212 14:42:48.683609    6713 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2538,"bootTime":1702418430,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:42:48.683745    6713 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:42:48.705845    6713 out.go:177] * [multinode-499000] minikube v1.32.0 on Darwin 14.2
	I1212 14:42:48.749309    6713 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 14:42:48.749342    6713 notify.go:220] Checking for updates...
	I1212 14:42:48.791455    6713 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:42:48.813542    6713 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:42:48.835371    6713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:42:48.856534    6713 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 14:42:48.877274    6713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 14:42:48.898753    6713 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 14:42:48.955149    6713 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:42:48.955313    6713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:42:49.056756    6713 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-12 22:42:49.046728417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:42:49.078466    6713 out.go:177] * Using the docker driver based on user configuration
	I1212 14:42:49.098972    6713 start.go:298] selected driver: docker
	I1212 14:42:49.099000    6713 start.go:902] validating driver "docker" against <nil>
	I1212 14:42:49.099020    6713 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 14:42:49.103556    6713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:42:49.203161    6713 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-12 22:42:49.19259875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/do
cker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:42:49.203376    6713 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 14:42:49.203566    6713 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 14:42:49.225142    6713 out.go:177] * Using Docker Desktop driver with root privileges
	I1212 14:42:49.246884    6713 cni.go:84] Creating CNI manager for ""
	I1212 14:42:49.246933    6713 cni.go:136] 0 nodes found, recommending kindnet
	I1212 14:42:49.246949    6713 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 14:42:49.246972    6713 start_flags.go:323] config:
	{Name:multinode-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:42:49.289830    6713 out.go:177] * Starting control plane node multinode-499000 in cluster multinode-499000
	I1212 14:42:49.310952    6713 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 14:42:49.332879    6713 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 14:42:49.374829    6713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:42:49.374885    6713 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 14:42:49.374898    6713 cache.go:56] Caching tarball of preloaded images
	I1212 14:42:49.374913    6713 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 14:42:49.375025    6713 preload.go:174] Found /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 14:42:49.375036    6713 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 14:42:49.375930    6713 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/multinode-499000/config.json ...
	I1212 14:42:49.375997    6713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/multinode-499000/config.json: {Name:mk2f35ec0431aaf3cc862568f235ba5f22c1dcc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:42:49.425223    6713 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 14:42:49.425240    6713 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 14:42:49.425259    6713 cache.go:194] Successfully downloaded all kic artifacts
	I1212 14:42:49.425314    6713 start.go:365] acquiring machines lock for multinode-499000: {Name:mk53f508a8f4d98fcd900a0ff67a1e257c9bcfa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:42:49.425465    6713 start.go:369] acquired machines lock for "multinode-499000" in 134.857µs
	I1212 14:42:49.425494    6713 start.go:93] Provisioning new machine with config: &{Name:multinode-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-499000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 14:42:49.425579    6713 start.go:125] createHost starting for "" (driver="docker")
	I1212 14:42:49.447041    6713 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 14:42:49.447502    6713 start.go:159] libmachine.API.Create for "multinode-499000" (driver="docker")
	I1212 14:42:49.447553    6713 client.go:168] LocalClient.Create starting
	I1212 14:42:49.447739    6713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 14:42:49.447838    6713 main.go:141] libmachine: Decoding PEM data...
	I1212 14:42:49.447876    6713 main.go:141] libmachine: Parsing certificate...
	I1212 14:42:49.447983    6713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 14:42:49.448054    6713 main.go:141] libmachine: Decoding PEM data...
	I1212 14:42:49.448071    6713 main.go:141] libmachine: Parsing certificate...
	I1212 14:42:49.449152    6713 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 14:42:49.499761    6713 cli_runner.go:211] docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 14:42:49.499855    6713 network_create.go:281] running [docker network inspect multinode-499000] to gather additional debugging logs...
	I1212 14:42:49.499873    6713 cli_runner.go:164] Run: docker network inspect multinode-499000
	W1212 14:42:49.549340    6713 cli_runner.go:211] docker network inspect multinode-499000 returned with exit code 1
	I1212 14:42:49.549381    6713 network_create.go:284] error running [docker network inspect multinode-499000]: docker network inspect multinode-499000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-499000 not found
	I1212 14:42:49.549392    6713 network_create.go:286] output of [docker network inspect multinode-499000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-499000 not found
	
	** /stderr **
	I1212 14:42:49.549513    6713 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 14:42:49.601120    6713 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 14:42:49.601514    6713 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022723e0}
	I1212 14:42:49.601528    6713 network_create.go:124] attempt to create docker network multinode-499000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1212 14:42:49.601593    6713 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	W1212 14:42:49.651922    6713 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000 returned with exit code 1
	W1212 14:42:49.651967    6713 network_create.go:149] failed to create docker network multinode-499000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1212 14:42:49.651984    6713 network_create.go:116] failed to create docker network multinode-499000 192.168.58.0/24, will retry: subnet is taken
	I1212 14:42:49.653390    6713 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 14:42:49.653758    6713 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002273490}
	I1212 14:42:49.653769    6713 network_create.go:124] attempt to create docker network multinode-499000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1212 14:42:49.653824    6713 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	I1212 14:42:49.738468    6713 network_create.go:108] docker network multinode-499000 192.168.67.0/24 created
	I1212 14:42:49.738509    6713 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-499000" container
	I1212 14:42:49.738625    6713 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 14:42:49.789341    6713 cli_runner.go:164] Run: docker volume create multinode-499000 --label name.minikube.sigs.k8s.io=multinode-499000 --label created_by.minikube.sigs.k8s.io=true
	I1212 14:42:49.840366    6713 oci.go:103] Successfully created a docker volume multinode-499000
	I1212 14:42:49.840492    6713 cli_runner.go:164] Run: docker run --rm --name multinode-499000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-499000 --entrypoint /usr/bin/test -v multinode-499000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 14:42:50.278419    6713 oci.go:107] Successfully prepared a docker volume multinode-499000
	I1212 14:42:50.278467    6713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:42:50.278480    6713 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 14:42:50.278570    6713 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-499000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 14:48:49.491394    6713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 14:48:49.491537    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:49.543968    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:48:49.544097    6713 retry.go:31] will retry after 285.44284ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:49.829781    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:49.884204    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:48:49.884319    6713 retry.go:31] will retry after 558.128407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:50.444867    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:50.495999    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:48:50.496107    6713 retry.go:31] will retry after 286.658818ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:50.784430    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:50.837563    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:48:50.837663    6713 retry.go:31] will retry after 625.080491ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:51.463135    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:51.514310    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 14:48:51.514411    6713 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 14:48:51.514429    6713 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:51.514491    6713 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 14:48:51.514544    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:51.563775    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:48:51.563867    6713 retry.go:31] will retry after 130.666105ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:51.697004    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:51.747827    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:48:51.747921    6713 retry.go:31] will retry after 293.765432ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:52.042145    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:52.094609    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:48:52.094697    6713 retry.go:31] will retry after 765.937724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:52.861324    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:48:52.917881    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 14:48:52.917987    6713 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 14:48:52.918005    6713 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:52.918019    6713 start.go:128] duration metric: createHost completed in 6m3.449010048s
	I1212 14:48:52.918025    6713 start.go:83] releasing machines lock for "multinode-499000", held for 6m3.449139619s
	W1212 14:48:52.918040    6713 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1212 14:48:52.918459    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:48:52.967914    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:48:52.967979    6713 delete.go:82] Unable to get host status for multinode-499000, assuming it has already been deleted: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	W1212 14:48:52.968079    6713 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1212 14:48:52.968090    6713 start.go:709] Will try again in 5 seconds ...
	I1212 14:48:57.969219    6713 start.go:365] acquiring machines lock for multinode-499000: {Name:mk53f508a8f4d98fcd900a0ff67a1e257c9bcfa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:48:57.969411    6713 start.go:369] acquired machines lock for "multinode-499000" in 141.783µs
	I1212 14:48:57.969447    6713 start.go:96] Skipping create...Using existing machine configuration
	I1212 14:48:57.969462    6713 fix.go:54] fixHost starting: 
	I1212 14:48:57.969924    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:48:58.023502    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:48:58.023545    6713 fix.go:102] recreateIfNeeded on multinode-499000: state= err=unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:58.023568    6713 fix.go:107] machineExists: false. err=machine does not exist
	I1212 14:48:58.067008    6713 out.go:177] * docker "multinode-499000" container is missing, will recreate.
	I1212 14:48:58.088916    6713 delete.go:124] DEMOLISHING multinode-499000 ...
	I1212 14:48:58.089097    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:48:58.141210    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 14:48:58.141265    6713 stop.go:75] unable to get state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:58.141300    6713 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:58.141679    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:48:58.191150    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:48:58.191209    6713 delete.go:82] Unable to get host status for multinode-499000, assuming it has already been deleted: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:58.191300    6713 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 14:48:58.240969    6713 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 14:48:58.241004    6713 kic.go:371] could not find the container multinode-499000 to remove it. will try anyways
	I1212 14:48:58.241086    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:48:58.291087    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 14:48:58.291140    6713 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:58.291228    6713 cli_runner.go:164] Run: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0"
	W1212 14:48:58.340752    6713 cli_runner.go:211] docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 14:48:58.340791    6713 oci.go:650] error shutdown multinode-499000: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:59.342467    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:48:59.393705    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:48:59.393749    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:59.393762    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:48:59.393787    6713 retry.go:31] will retry after 442.336952ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:59.837160    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:48:59.890483    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:48:59.890527    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:48:59.890538    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:48:59.890559    6713 retry.go:31] will retry after 1.010537949s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:00.901479    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:49:00.953709    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:49:00.953753    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:00.953764    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:49:00.953786    6713 retry.go:31] will retry after 685.067373ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:01.641066    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:49:01.696571    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:49:01.696615    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:01.696625    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:49:01.696649    6713 retry.go:31] will retry after 1.581477653s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:03.279029    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:49:03.333272    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:49:03.333319    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:03.333330    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:49:03.333351    6713 retry.go:31] will retry after 2.446456443s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:05.780694    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:49:05.836118    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:49:05.836162    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:05.836172    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:49:05.836203    6713 retry.go:31] will retry after 5.079941762s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:10.917594    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:49:10.970845    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:49:10.970889    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:10.970906    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:49:10.970931    6713 retry.go:31] will retry after 7.393904724s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:18.365558    6713 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:49:18.419421    6713 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:49:18.419477    6713 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:49:18.419490    6713 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:49:18.419517    6713 oci.go:88] couldn't shut down multinode-499000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	 
	I1212 14:49:18.419589    6713 cli_runner.go:164] Run: docker rm -f -v multinode-499000
	I1212 14:49:18.469842    6713 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 14:49:18.520068    6713 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 14:49:18.520185    6713 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 14:49:18.570556    6713 cli_runner.go:164] Run: docker network rm multinode-499000
	I1212 14:49:18.666091    6713 fix.go:114] Sleeping 1 second for extra luck!
	I1212 14:49:19.666482    6713 start.go:125] createHost starting for "" (driver="docker")
	I1212 14:49:19.688383    6713 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 14:49:19.688529    6713 start.go:159] libmachine.API.Create for "multinode-499000" (driver="docker")
	I1212 14:49:19.688576    6713 client.go:168] LocalClient.Create starting
	I1212 14:49:19.688753    6713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 14:49:19.688828    6713 main.go:141] libmachine: Decoding PEM data...
	I1212 14:49:19.688854    6713 main.go:141] libmachine: Parsing certificate...
	I1212 14:49:19.688922    6713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 14:49:19.688976    6713 main.go:141] libmachine: Decoding PEM data...
	I1212 14:49:19.688989    6713 main.go:141] libmachine: Parsing certificate...
	I1212 14:49:19.710767    6713 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 14:49:19.762698    6713 cli_runner.go:211] docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 14:49:19.762807    6713 network_create.go:281] running [docker network inspect multinode-499000] to gather additional debugging logs...
	I1212 14:49:19.762828    6713 cli_runner.go:164] Run: docker network inspect multinode-499000
	W1212 14:49:19.812772    6713 cli_runner.go:211] docker network inspect multinode-499000 returned with exit code 1
	I1212 14:49:19.812803    6713 network_create.go:284] error running [docker network inspect multinode-499000]: docker network inspect multinode-499000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-499000 not found
	I1212 14:49:19.812814    6713 network_create.go:286] output of [docker network inspect multinode-499000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-499000 not found
	
	** /stderr **
	I1212 14:49:19.812961    6713 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 14:49:19.864060    6713 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 14:49:19.865634    6713 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 14:49:19.867240    6713 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 14:49:19.867676    6713 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d900}
	I1212 14:49:19.867694    6713 network_create.go:124] attempt to create docker network multinode-499000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1212 14:49:19.867760    6713 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	I1212 14:49:19.991447    6713 network_create.go:108] docker network multinode-499000 192.168.76.0/24 created
	I1212 14:49:19.991815    6713 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-499000" container
	I1212 14:49:19.991925    6713 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 14:49:20.042731    6713 cli_runner.go:164] Run: docker volume create multinode-499000 --label name.minikube.sigs.k8s.io=multinode-499000 --label created_by.minikube.sigs.k8s.io=true
	I1212 14:49:20.092132    6713 oci.go:103] Successfully created a docker volume multinode-499000
	I1212 14:49:20.092261    6713 cli_runner.go:164] Run: docker run --rm --name multinode-499000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-499000 --entrypoint /usr/bin/test -v multinode-499000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 14:49:20.394861    6713 oci.go:107] Successfully prepared a docker volume multinode-499000
	I1212 14:49:20.394893    6713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:49:20.394905    6713 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 14:49:20.395005    6713 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-499000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 14:55:19.693559    6713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 14:55:19.693683    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:19.746639    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:19.746752    6713 retry.go:31] will retry after 344.735624ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:20.092020    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:20.144987    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:20.145111    6713 retry.go:31] will retry after 250.629851ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:20.398110    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:20.449957    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:20.450056    6713 retry.go:31] will retry after 649.309663ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:21.101726    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:21.154198    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 14:55:21.154322    6713 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 14:55:21.154343    6713 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:21.154401    6713 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 14:55:21.154461    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:21.204361    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:21.204478    6713 retry.go:31] will retry after 127.358546ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:21.333366    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:21.388830    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:21.388929    6713 retry.go:31] will retry after 215.459046ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:21.606677    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:21.658546    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:21.658650    6713 retry.go:31] will retry after 540.59511ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:22.199644    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:22.253822    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:22.253928    6713 retry.go:31] will retry after 718.208813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:22.972470    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:23.023149    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 14:55:23.023260    6713 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 14:55:23.023282    6713 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:23.023296    6713 start.go:128] duration metric: createHost completed in 6m3.353629006s
	I1212 14:55:23.023372    6713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 14:55:23.023423    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:23.073570    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:23.073668    6713 retry.go:31] will retry after 154.688919ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:23.228573    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:23.280956    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:23.281049    6713 retry.go:31] will retry after 419.494556ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:23.702486    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:23.755515    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:23.755605    6713 retry.go:31] will retry after 380.656096ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:24.136543    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:24.191795    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 14:55:24.191901    6713 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 14:55:24.191918    6713 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:24.191980    6713 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 14:55:24.192032    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:24.242542    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:24.242629    6713 retry.go:31] will retry after 186.398387ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:24.430046    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:24.483692    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:24.483792    6713 retry.go:31] will retry after 429.420606ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:24.913813    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:24.965703    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 14:55:24.965792    6713 retry.go:31] will retry after 626.569858ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:25.594711    6713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 14:55:25.647489    6713 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 14:55:25.647586    6713 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 14:55:25.647603    6713 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:55:25.647621    6713 fix.go:56] fixHost completed within 6m27.674795066s
	I1212 14:55:25.647639    6713 start.go:83] releasing machines lock for "multinode-499000", held for 6m27.67484663s
	W1212 14:55:25.647717    6713 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-499000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-499000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1212 14:55:25.690880    6713 out.go:177] 
	W1212 14:55:25.713057    6713 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1212 14:55:25.713114    6713 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1212 14:55:25.713146    6713 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1212 14:55:25.734902    6713 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-499000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
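Note: the failure above is DRV_CREATE_TIMEOUT, i.e. host creation timed out after 360 seconds, and minikube's own output suggests deleting the half-created profile before retrying. Below is a minimal Go sketch of that recovery sequence, reusing only the binary path, profile name, and start flags that appear in the failing command; the helper itself is illustrative and not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the same minikube binary the test uses and echoes its output.
// Binary path and arguments are copied from the log; nothing here is the
// test harness's actual code.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	fmt.Printf("$ out/minikube-darwin-amd64 %v\n%s\n", args, out)
	return err
}

func main() {
	// Clean up first, as the error message recommends, then retry the start.
	_ = run("delete", "-p", "multinode-499000")
	if err := run("start", "-p", "multinode-499000", "--wait=true", "--memory=2200",
		"--nodes=2", "-v=8", "--alsologtostderr", "--driver=docker"); err != nil {
		fmt.Println("start still failing:", err)
	}
}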
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.629765ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:55:25.970713    7027 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (757.32s)
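The long run of "will retry after ..." entries in the stderr above is minikube backing off while `docker container inspect --format={{.State.Status}}` keeps failing for a container that no longer exists. The following is a rough, self-contained sketch of that verify-shutdown loop; it illustrates the pattern visible in the log (retry.go / oci.go), it is not minikube's actual implementation, and the backoff constants and deadline are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForExited polls `docker container inspect` until the container reports
// state "exited" or the deadline passes, doubling the wait between attempts.
func waitForExited(name string, deadline time.Duration) error {
	backoff := 500 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "exited" {
			return nil
		}
		// With a missing container, inspect exits 1 and status stays empty,
		// matching the "status is  but expect it to be exited" lines above.
		fmt.Printf("will retry after %v: container %q status is %q\n", backoff, name, status)
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("couldn't verify container %q is exited", name)
}

func main() {
	if err := waitForExited("multinode-499000", 20*time.Second); err != nil {
		fmt.Println(err)
	}
}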

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (93.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (91.216873ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-499000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- rollout status deployment/busybox: exit status 1 (92.067817ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.656726ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.872102ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.74647ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.55113ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.18282ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.910331ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.983169ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.520619ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.531295ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.75852ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.622203ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (91.824165ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- exec  -- nslookup kubernetes.io: exit status 1 (92.684387ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- exec  -- nslookup kubernetes.default: exit status 1 (92.563184ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (92.593366ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.152164ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:56:59.797766    7106 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.83s)
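Every kubectl step in this subtest fails identically with `no server found for cluster "multinode-499000"`: the kubeconfig entry exists, but the apiserver behind it was never created because the FreshStart2Nodes step above timed out. Below is a hedged sketch of the polling the test output shows, shelling out to the same minikube-wrapped kubectl command; the function name, retry count, and sleep interval are illustrative, not the test's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// podIPs runs the kubectl invocation seen in the log and returns its output.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
	return string(out), err
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		ips, err := podIPs("multinode-499000")
		if err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		fmt.Printf("attempt %d failed: %v (%s)\n", attempt, err, ips)
		time.Sleep(2 * time.Second)
	}
}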

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-499000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (91.756835ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-499000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.257248ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:00.094733    7115 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.30s)
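Each post-mortem ends with the same host check, `out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000`, which prints "Nonexistent" and exits 7; the harness treats exit status 7 as "may be ok" and skips log retrieval. A small sketch of reading that state and exit code programmatically follows (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns minikube's reported host state plus the command's exit code.
func hostState(profile string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format", "{{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout only; the status error above goes to stderr
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	state, code := hostState("multinode-499000")
	fmt.Printf("host state %q (exit %d)\n", state, code)
}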

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-499000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-499000 -v 3 --alsologtostderr: exit status 80 (204.467644ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:57:00.150139    7119 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:57:00.150458    7119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:00.150464    7119 out.go:309] Setting ErrFile to fd 2...
	I1212 14:57:00.150468    7119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:00.150648    7119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:57:00.150995    7119 mustload.go:65] Loading cluster: multinode-499000
	I1212 14:57:00.151311    7119 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 14:57:00.151693    7119 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:00.201599    7119 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:00.223961    7119 out.go:177] 
	W1212 14:57:00.245759    7119 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 14:57:00.245786    7119 out.go:239] * 
	* 
	W1212 14:57:00.249175    7119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 14:57:00.270618    7119 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-499000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (106.987906ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:00.461038    7125 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-499000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-499000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (35.471455ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-499000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-499000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-499000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (105.809737ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:00.656515    7132 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:156: expected profile "multinode-499000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-1-242000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-499000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-499000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KV
MNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-499000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\
"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"
AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (106.099655ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:00.995353    7144 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 status --output json --alsologtostderr: exit status 7 (106.433276ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-499000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:57:01.050595    7148 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:57:01.050808    7148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:01.050814    7148 out.go:309] Setting ErrFile to fd 2...
	I1212 14:57:01.050818    7148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:01.051004    7148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:57:01.051196    7148 out.go:303] Setting JSON to true
	I1212 14:57:01.051224    7148 mustload.go:65] Loading cluster: multinode-499000
	I1212 14:57:01.051257    7148 notify.go:220] Checking for updates...
	I1212 14:57:01.051517    7148 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 14:57:01.051529    7148 status.go:255] checking status of multinode-499000 ...
	I1212 14:57:01.051921    7148 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:01.101967    7148 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:01.102024    7148 status.go:330] multinode-499000 host status = "" (err=state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	)
	I1212 14:57:01.102042    7148 status.go:257] multinode-499000 status: &{Name:multinode-499000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1212 14:57:01.102061    7148 status.go:260] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	E1212 14:57:01.102069    7148 status.go:263] The "multinode-499000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-499000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.789399ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:01.264043    7154 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.27s)

                                                
                                    
TestMultiNode/serial/StopNode (0.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 node stop m03: exit status 85 (145.803434ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-499000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 status: exit status 7 (107.270208ms)

                                                
                                                
-- stdout --
	multinode-499000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:01.518398    7160 status.go:260] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	E1212 14:57:01.518409    7160 status.go:263] The "multinode-499000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr: exit status 7 (108.778455ms)

                                                
                                                
-- stdout --
	multinode-499000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:57:01.574498    7164 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:57:01.574802    7164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:01.574807    7164 out.go:309] Setting ErrFile to fd 2...
	I1212 14:57:01.574811    7164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:01.575002    7164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:57:01.575185    7164 out.go:303] Setting JSON to false
	I1212 14:57:01.575207    7164 mustload.go:65] Loading cluster: multinode-499000
	I1212 14:57:01.575252    7164 notify.go:220] Checking for updates...
	I1212 14:57:01.575503    7164 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 14:57:01.575516    7164 status.go:255] checking status of multinode-499000 ...
	I1212 14:57:01.575997    7164 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:01.627180    7164 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:01.627240    7164 status.go:330] multinode-499000 host status = "" (err=state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	)
	I1212 14:57:01.627257    7164 status.go:257] multinode-499000 status: &{Name:multinode-499000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1212 14:57:01.627275    7164 status.go:260] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	E1212 14:57:01.627284    7164 status.go:263] The "multinode-499000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr": multinode-499000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:261: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr": multinode-499000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:265: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr": multinode-499000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.76938ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:01.789106    7170 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.52s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 node start m03 --alsologtostderr: exit status 85 (145.68125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:57:01.899979    7176 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:57:01.900309    7176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:01.900315    7176 out.go:309] Setting ErrFile to fd 2...
	I1212 14:57:01.900320    7176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:01.900499    7176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:57:01.900852    7176 mustload.go:65] Loading cluster: multinode-499000
	I1212 14:57:01.901136    7176 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 14:57:01.922076    7176 out.go:177] 
	W1212 14:57:01.942959    7176 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1212 14:57:01.942984    7176 out.go:239] * 
	* 
	W1212 14:57:01.946827    7176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 14:57:01.968069    7176 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1212 14:57:01.899979    7176 out.go:296] Setting OutFile to fd 1 ...
I1212 14:57:01.900309    7176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:57:01.900315    7176 out.go:309] Setting ErrFile to fd 2...
I1212 14:57:01.900320    7176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:57:01.900499    7176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
I1212 14:57:01.900852    7176 mustload.go:65] Loading cluster: multinode-499000
I1212 14:57:01.901136    7176 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:57:01.922076    7176 out.go:177] 
W1212 14:57:01.942959    7176 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1212 14:57:01.942984    7176 out.go:239] * 
* 
W1212 14:57:01.946827    7176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 14:57:01.968069    7176 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-499000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 status: exit status 7 (107.522321ms)

                                                
                                                
-- stdout --
	multinode-499000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:02.098550    7178 status.go:260] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	E1212 14:57:02.098561    7178 status.go:263] The "multinode-499000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-499000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "a99aece8616fa95bbe154cff27932652fc6ce8cfd465be1d63a05d8982636843",
	        "Created": "2023-12-12T22:49:19.953273474Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.312984ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 14:57:02.259954    7184 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (794.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-499000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-499000
E1212 14:57:15.898600    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-499000: exit status 82 (18.049811455s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-499000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-499000" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-499000 --wait=true -v=8 --alsologtostderr
E1212 14:57:42.634773    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:01:59.081326    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:02:16.028142    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:02:42.764336    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:07:16.036539    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:07:25.824690    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:07:42.773258    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-499000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m55.895412698s)

                                                
                                                
-- stdout --
	* [multinode-499000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-499000 in cluster multinode-499000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* docker "multinode-499000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-499000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:57:20.422945    7206 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:57:20.423235    7206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:20.423241    7206 out.go:309] Setting ErrFile to fd 2...
	I1212 14:57:20.423245    7206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:57:20.423439    7206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:57:20.424791    7206 out.go:303] Setting JSON to false
	I1212 14:57:20.447533    7206 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3410,"bootTime":1702418430,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:57:20.447628    7206 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:57:20.469108    7206 out.go:177] * [multinode-499000] minikube v1.32.0 on Darwin 14.2
	I1212 14:57:20.511044    7206 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 14:57:20.511160    7206 notify.go:220] Checking for updates...
	I1212 14:57:20.554846    7206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:57:20.575838    7206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:57:20.597007    7206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:57:20.619921    7206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 14:57:20.661834    7206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 14:57:20.683687    7206 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 14:57:20.683888    7206 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 14:57:20.740736    7206 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:57:20.740888    7206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:57:20.840839    7206 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-12-12 22:57:20.831032258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:57:20.862628    7206 out.go:177] * Using the docker driver based on existing profile
	I1212 14:57:20.884405    7206 start.go:298] selected driver: docker
	I1212 14:57:20.884449    7206 start.go:902] validating driver "docker" against &{Name:multinode-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:57:20.884555    7206 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 14:57:20.884765    7206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:57:20.984529    7206 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-12-12 22:57:20.97500079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:57:20.987652    7206 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 14:57:20.987718    7206 cni.go:84] Creating CNI manager for ""
	I1212 14:57:20.987727    7206 cni.go:136] 1 nodes found, recommending kindnet
	I1212 14:57:20.987736    7206 start_flags.go:323] config:
	{Name:multinode-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:57:21.031283    7206 out.go:177] * Starting control plane node multinode-499000 in cluster multinode-499000
	I1212 14:57:21.053386    7206 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 14:57:21.097296    7206 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 14:57:21.118265    7206 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:57:21.118353    7206 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 14:57:21.118373    7206 cache.go:56] Caching tarball of preloaded images
	I1212 14:57:21.118359    7206 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 14:57:21.118575    7206 preload.go:174] Found /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 14:57:21.118594    7206 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 14:57:21.118751    7206 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/multinode-499000/config.json ...
	I1212 14:57:21.170010    7206 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 14:57:21.170036    7206 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 14:57:21.170059    7206 cache.go:194] Successfully downloaded all kic artifacts
	I1212 14:57:21.170104    7206 start.go:365] acquiring machines lock for multinode-499000: {Name:mk53f508a8f4d98fcd900a0ff67a1e257c9bcfa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 14:57:21.170194    7206 start.go:369] acquired machines lock for "multinode-499000" in 71.836µs
	I1212 14:57:21.170220    7206 start.go:96] Skipping create...Using existing machine configuration
	I1212 14:57:21.170228    7206 fix.go:54] fixHost starting: 
	I1212 14:57:21.170477    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:21.220302    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:21.220367    7206 fix.go:102] recreateIfNeeded on multinode-499000: state= err=unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:21.220387    7206 fix.go:107] machineExists: false. err=machine does not exist
	I1212 14:57:21.242079    7206 out.go:177] * docker "multinode-499000" container is missing, will recreate.
	I1212 14:57:21.284729    7206 delete.go:124] DEMOLISHING multinode-499000 ...
	I1212 14:57:21.284955    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:21.336033    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 14:57:21.336078    7206 stop.go:75] unable to get state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:21.336105    7206 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:21.336449    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:21.385963    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:21.386010    7206 delete.go:82] Unable to get host status for multinode-499000, assuming it has already been deleted: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:21.386087    7206 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 14:57:21.435625    7206 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 14:57:21.435665    7206 kic.go:371] could not find the container multinode-499000 to remove it. will try anyways
	I1212 14:57:21.435743    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:21.485162    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 14:57:21.485204    7206 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:21.485282    7206 cli_runner.go:164] Run: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0"
	W1212 14:57:21.535025    7206 cli_runner.go:211] docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 14:57:21.535054    7206 oci.go:650] error shutdown multinode-499000: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:22.535568    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:22.589898    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:22.589946    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:22.589954    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:22.589993    7206 retry.go:31] will retry after 640.838709ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:23.232283    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:23.285594    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:23.285635    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:23.285647    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:23.285673    7206 retry.go:31] will retry after 942.364969ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:24.229866    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:24.283397    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:24.283438    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:24.283446    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:24.283475    7206 retry.go:31] will retry after 696.790618ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:24.982620    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:25.036370    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:25.036415    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:25.036424    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:25.036449    7206 retry.go:31] will retry after 938.335001ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:25.977130    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:26.031535    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:26.031580    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:26.031594    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:26.031620    7206 retry.go:31] will retry after 2.972328327s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:29.004589    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:29.056532    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:29.056574    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:29.056584    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:29.056609    7206 retry.go:31] will retry after 5.111598315s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:34.169895    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:34.223698    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:34.223742    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:34.223751    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:34.223776    7206 retry.go:31] will retry after 6.675692331s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:40.900286    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 14:57:40.952517    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 14:57:40.952559    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 14:57:40.952567    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 14:57:40.952596    7206 oci.go:88] couldn't shut down multinode-499000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	 
	I1212 14:57:40.952670    7206 cli_runner.go:164] Run: docker rm -f -v multinode-499000
	I1212 14:57:41.002837    7206 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 14:57:41.052410    7206 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 14:57:41.052519    7206 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 14:57:41.102340    7206 cli_runner.go:164] Run: docker network rm multinode-499000
	I1212 14:57:41.200707    7206 fix.go:114] Sleeping 1 second for extra luck!
	I1212 14:57:42.201705    7206 start.go:125] createHost starting for "" (driver="docker")
	I1212 14:57:42.225080    7206 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 14:57:42.225304    7206 start.go:159] libmachine.API.Create for "multinode-499000" (driver="docker")
	I1212 14:57:42.225360    7206 client.go:168] LocalClient.Create starting
	I1212 14:57:42.225534    7206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 14:57:42.225623    7206 main.go:141] libmachine: Decoding PEM data...
	I1212 14:57:42.225659    7206 main.go:141] libmachine: Parsing certificate...
	I1212 14:57:42.225763    7206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 14:57:42.225855    7206 main.go:141] libmachine: Decoding PEM data...
	I1212 14:57:42.225877    7206 main.go:141] libmachine: Parsing certificate...
	I1212 14:57:42.246890    7206 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 14:57:42.297871    7206 cli_runner.go:211] docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 14:57:42.297957    7206 network_create.go:281] running [docker network inspect multinode-499000] to gather additional debugging logs...
	I1212 14:57:42.297974    7206 cli_runner.go:164] Run: docker network inspect multinode-499000
	W1212 14:57:42.347718    7206 cli_runner.go:211] docker network inspect multinode-499000 returned with exit code 1
	I1212 14:57:42.347759    7206 network_create.go:284] error running [docker network inspect multinode-499000]: docker network inspect multinode-499000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-499000 not found
	I1212 14:57:42.347771    7206 network_create.go:286] output of [docker network inspect multinode-499000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-499000 not found
	
	** /stderr **
	I1212 14:57:42.347914    7206 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 14:57:42.400502    7206 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 14:57:42.400926    7206 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002582870}
	I1212 14:57:42.400941    7206 network_create.go:124] attempt to create docker network multinode-499000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1212 14:57:42.401009    7206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	W1212 14:57:42.451758    7206 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000 returned with exit code 1
	W1212 14:57:42.451797    7206 network_create.go:149] failed to create docker network multinode-499000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1212 14:57:42.451815    7206 network_create.go:116] failed to create docker network multinode-499000 192.168.58.0/24, will retry: subnet is taken
	I1212 14:57:42.453326    7206 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 14:57:42.453705    7206 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00244efd0}
	I1212 14:57:42.453720    7206 network_create.go:124] attempt to create docker network multinode-499000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1212 14:57:42.453787    7206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	I1212 14:57:42.538503    7206 network_create.go:108] docker network multinode-499000 192.168.67.0/24 created
	I1212 14:57:42.538545    7206 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-499000" container
	I1212 14:57:42.538651    7206 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 14:57:42.589313    7206 cli_runner.go:164] Run: docker volume create multinode-499000 --label name.minikube.sigs.k8s.io=multinode-499000 --label created_by.minikube.sigs.k8s.io=true
	I1212 14:57:42.638653    7206 oci.go:103] Successfully created a docker volume multinode-499000
	I1212 14:57:42.638759    7206 cli_runner.go:164] Run: docker run --rm --name multinode-499000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-499000 --entrypoint /usr/bin/test -v multinode-499000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 14:57:42.930345    7206 oci.go:107] Successfully prepared a docker volume multinode-499000
	I1212 14:57:42.930382    7206 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:57:42.930396    7206 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 14:57:42.930491    7206 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-499000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:03:42.357902    7206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:03:42.358036    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:42.414106    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:42.414217    7206 retry.go:31] will retry after 277.078095ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:42.692890    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:42.747063    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:42.747173    7206 retry.go:31] will retry after 497.440997ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:43.244914    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:43.295954    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:43.296068    7206 retry.go:31] will retry after 449.143494ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:43.747602    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:43.799727    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:03:43.799841    7206 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:03:43.799859    7206 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:43.799918    7206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:03:43.799980    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:43.849894    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:43.850002    7206 retry.go:31] will retry after 205.647043ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:44.057197    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:44.111640    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:44.111747    7206 retry.go:31] will retry after 560.66479ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:44.673989    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:44.727796    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:44.727889    7206 retry.go:31] will retry after 430.066239ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:45.160407    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:45.214453    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:03:45.214556    7206 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:03:45.214573    7206 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:45.214586    7206 start.go:128] duration metric: createHost completed in 6m2.881415102s
	I1212 15:03:45.214677    7206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:03:45.214729    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:45.266061    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:45.266151    7206 retry.go:31] will retry after 359.66238ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:45.626268    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:45.678414    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:45.678505    7206 retry.go:31] will retry after 522.852435ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:46.202561    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:46.255898    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:46.256000    7206 retry.go:31] will retry after 683.728457ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:46.940062    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:46.992852    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:03:46.992949    7206 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:03:46.992969    7206 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:46.993022    7206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:03:46.993081    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:47.042245    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:47.042337    7206 retry.go:31] will retry after 263.410138ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:47.308133    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:47.361954    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:47.362056    7206 retry.go:31] will retry after 407.142545ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:47.769594    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:47.823728    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:03:47.823817    7206 retry.go:31] will retry after 692.601466ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:48.516752    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:03:48.570704    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:03:48.570803    7206 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:03:48.570821    7206 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:48.570837    7206 fix.go:56] fixHost completed within 6m27.268886426s
	I1212 15:03:48.570843    7206 start.go:83] releasing machines lock for "multinode-499000", held for 6m27.26891763s
	W1212 15:03:48.570858    7206 start.go:694] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W1212 15:03:48.570926    7206 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I1212 15:03:48.570933    7206 start.go:709] Will try again in 5 seconds ...
	I1212 15:03:53.571259    7206 start.go:365] acquiring machines lock for multinode-499000: {Name:mk53f508a8f4d98fcd900a0ff67a1e257c9bcfa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:03:53.571443    7206 start.go:369] acquired machines lock for "multinode-499000" in 134.094µs
	I1212 15:03:53.571490    7206 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:03:53.571498    7206 fix.go:54] fixHost starting: 
	I1212 15:03:53.571941    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:53.628038    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:03:53.628082    7206 fix.go:102] recreateIfNeeded on multinode-499000: state= err=unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:53.628100    7206 fix.go:107] machineExists: false. err=machine does not exist
	I1212 15:03:53.650338    7206 out.go:177] * docker "multinode-499000" container is missing, will recreate.
	I1212 15:03:53.693586    7206 delete.go:124] DEMOLISHING multinode-499000 ...
	I1212 15:03:53.693763    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:53.744534    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 15:03:53.744577    7206 stop.go:75] unable to get state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:53.744603    7206 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:53.744971    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:53.794962    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:03:53.795013    7206 delete.go:82] Unable to get host status for multinode-499000, assuming it has already been deleted: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:53.795091    7206 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 15:03:53.844777    7206 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 15:03:53.844806    7206 kic.go:371] could not find the container multinode-499000 to remove it. will try anyways
	I1212 15:03:53.844883    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:53.894265    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 15:03:53.894309    7206 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:53.894389    7206 cli_runner.go:164] Run: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0"
	W1212 15:03:53.944202    7206 cli_runner.go:211] docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 15:03:53.944231    7206 oci.go:650] error shutdown multinode-499000: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:54.945059    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:54.997304    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:03:54.997349    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:54.997357    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:03:54.997378    7206 retry.go:31] will retry after 676.745224ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:55.675440    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:55.728133    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:03:55.728174    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:55.728182    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:03:55.728207    7206 retry.go:31] will retry after 685.42774ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:56.415864    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:56.469273    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:03:56.469319    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:56.469330    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:03:56.469355    7206 retry.go:31] will retry after 883.301612ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:57.354966    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:57.408337    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:03:57.408382    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:57.408392    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:03:57.408417    7206 retry.go:31] will retry after 1.069640177s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:58.478436    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:03:58.532295    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:03:58.532340    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:03:58.532348    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:03:58.532374    7206 retry.go:31] will retry after 3.104129438s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:04:01.638224    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:04:01.691672    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:04:01.691721    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:04:01.691729    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:04:01.691753    7206 retry.go:31] will retry after 3.001720157s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:04:04.693881    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:04:04.747247    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:04:04.747292    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:04:04.747304    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:04:04.747326    7206 retry.go:31] will retry after 3.432321963s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:04:08.181135    7206 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:04:08.234566    7206 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:04:08.234611    7206 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:04:08.234619    7206 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:04:08.234651    7206 oci.go:88] couldn't shut down multinode-499000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	 
	I1212 15:04:08.234731    7206 cli_runner.go:164] Run: docker rm -f -v multinode-499000
	I1212 15:04:08.285899    7206 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 15:04:08.335440    7206 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 15:04:08.335556    7206 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:04:08.384973    7206 cli_runner.go:164] Run: docker network rm multinode-499000
	I1212 15:04:08.500601    7206 fix.go:114] Sleeping 1 second for extra luck!
	I1212 15:04:09.501121    7206 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:04:09.524254    7206 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 15:04:09.524414    7206 start.go:159] libmachine.API.Create for "multinode-499000" (driver="docker")
	I1212 15:04:09.524452    7206 client.go:168] LocalClient.Create starting
	I1212 15:04:09.524655    7206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:04:09.524750    7206 main.go:141] libmachine: Decoding PEM data...
	I1212 15:04:09.524785    7206 main.go:141] libmachine: Parsing certificate...
	I1212 15:04:09.524877    7206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:04:09.524947    7206 main.go:141] libmachine: Decoding PEM data...
	I1212 15:04:09.524964    7206 main.go:141] libmachine: Parsing certificate...
	I1212 15:04:09.525764    7206 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:04:09.577088    7206 cli_runner.go:211] docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:04:09.577173    7206 network_create.go:281] running [docker network inspect multinode-499000] to gather additional debugging logs...
	I1212 15:04:09.577191    7206 cli_runner.go:164] Run: docker network inspect multinode-499000
	W1212 15:04:09.627261    7206 cli_runner.go:211] docker network inspect multinode-499000 returned with exit code 1
	I1212 15:04:09.627292    7206 network_create.go:284] error running [docker network inspect multinode-499000]: docker network inspect multinode-499000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-499000 not found
	I1212 15:04:09.627304    7206 network_create.go:286] output of [docker network inspect multinode-499000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-499000 not found
	
	** /stderr **
	I1212 15:04:09.627473    7206 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:04:09.679165    7206 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:04:09.680714    7206 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:04:09.682299    7206 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:04:09.682653    7206 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002583f80}
	I1212 15:04:09.682667    7206 network_create.go:124] attempt to create docker network multinode-499000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1212 15:04:09.682730    7206 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	I1212 15:04:09.768523    7206 network_create.go:108] docker network multinode-499000 192.168.76.0/24 created
	I1212 15:04:09.768554    7206 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-499000" container
	I1212 15:04:09.768659    7206 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:04:09.819257    7206 cli_runner.go:164] Run: docker volume create multinode-499000 --label name.minikube.sigs.k8s.io=multinode-499000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:04:09.869175    7206 oci.go:103] Successfully created a docker volume multinode-499000
	I1212 15:04:09.869315    7206 cli_runner.go:164] Run: docker run --rm --name multinode-499000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-499000 --entrypoint /usr/bin/test -v multinode-499000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:04:10.214557    7206 oci.go:107] Successfully prepared a docker volume multinode-499000
	I1212 15:04:10.214592    7206 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:04:10.214604    7206 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:04:10.214707    7206 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-499000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 15:10:09.536274    7206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:10:09.536406    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:09.589106    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:09.589207    7206 retry.go:31] will retry after 214.465368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:09.806131    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:09.859928    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:09.860053    7206 retry.go:31] will retry after 448.343525ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:10.309135    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:10.363197    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:10.363311    7206 retry.go:31] will retry after 659.857889ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:11.024557    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:11.080249    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:10:11.080376    7206 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:10:11.080395    7206 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:11.080447    7206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:10:11.080498    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:11.130471    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:11.130578    7206 retry.go:31] will retry after 131.77114ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:11.262709    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:11.312908    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:11.313007    7206 retry.go:31] will retry after 213.580623ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:11.526920    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:11.580256    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:11.580366    7206 retry.go:31] will retry after 569.03797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:12.149757    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:12.201616    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:12.201726    7206 retry.go:31] will retry after 876.101077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:13.078891    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:13.132834    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:10:13.132938    7206 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:10:13.132956    7206 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:13.132966    7206 start.go:128] duration metric: createHost completed in 6m3.620912514s
	I1212 15:10:13.133039    7206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 15:10:13.133098    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:13.184240    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:13.184336    7206 retry.go:31] will retry after 364.506615ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:13.551162    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:13.604674    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:13.604770    7206 retry.go:31] will retry after 542.483594ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:14.149647    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:14.202244    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:14.202343    7206 retry.go:31] will retry after 622.40258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:14.827140    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:14.880353    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:10:14.880447    7206 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:10:14.880466    7206 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:14.880527    7206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 15:10:14.880579    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:14.930928    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:14.931031    7206 retry.go:31] will retry after 158.39059ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:15.089781    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:15.143882    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:15.143977    7206 retry.go:31] will retry after 371.224744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:15.516660    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:15.590336    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	I1212 15:10:15.590457    7206 retry.go:31] will retry after 611.309969ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:16.202188    7206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000
	W1212 15:10:16.255890    7206 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000 returned with exit code 1
	W1212 15:10:16.255989    7206 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	W1212 15:10:16.256003    7206 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-499000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-499000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:16.256015    7206 fix.go:56] fixHost completed within 6m22.673038854s
	I1212 15:10:16.256025    7206 start.go:83] releasing machines lock for "multinode-499000", held for 6m22.67309086s
	W1212 15:10:16.256099    7206 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-499000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-499000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1212 15:10:16.299696    7206 out.go:177] 
	W1212 15:10:16.321716    7206 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1212 15:10:16.321770    7206 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1212 15:10:16.321798    7206 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1212 15:10:16.343640    7206 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-499000" : exit status 52
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-499000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "9a2028299e33934d544736714a9ac2b78adc1a59e47ddfe2a92a81183f5727f1",
	        "Created": "2023-12-12T23:04:09.729028746Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.569355ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:10:16.643190    7562 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (794.24s)
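
The RestartKeepsNodes run above fails with DRV_CREATE_TIMEOUT after the recreated "multinode-499000" container never appears, and the post-mortem docker inspect shows only the orphaned bridge network left behind. A minimal manual-recovery sketch, following the suggestion printed in the failure output (profile and network names are taken from the log; the commands are standard minikube/docker CLI):

	# Delete the broken profile, as suggested by the DRV_CREATE_TIMEOUT message above
	out/minikube-darwin-amd64 delete -p multinode-499000
	# The post-mortem shows the bridge network survived with no containers attached;
	# remove it as well if the delete leaves it behind
	docker network rm multinode-499000
	# Retry the same start that timed out
	out/minikube-darwin-amd64 start -p multinode-499000 --wait=true -v=8 --alsologtostderr --driver=docker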

                                                
                                    
TestMultiNode/serial/DeleteNode (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 node delete m03: exit status 80 (197.307247ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-499000 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr: exit status 7 (108.48206ms)

                                                
                                                
-- stdout --
	multinode-499000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:10:16.897729    7570 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:10:16.898038    7570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:16.898044    7570 out.go:309] Setting ErrFile to fd 2...
	I1212 15:10:16.898048    7570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:16.898233    7570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 15:10:16.898419    7570 out.go:303] Setting JSON to false
	I1212 15:10:16.898442    7570 mustload.go:65] Loading cluster: multinode-499000
	I1212 15:10:16.898489    7570 notify.go:220] Checking for updates...
	I1212 15:10:16.898725    7570 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:10:16.898738    7570 status.go:255] checking status of multinode-499000 ...
	I1212 15:10:16.899164    7570 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:16.949443    7570 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:16.949505    7570 status.go:330] multinode-499000 host status = "" (err=state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	)
	I1212 15:10:16.949529    7570 status.go:257] multinode-499000 status: &{Name:multinode-499000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1212 15:10:16.949546    7570 status.go:260] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	E1212 15:10:16.949554    7570 status.go:263] The "multinode-499000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "9a2028299e33934d544736714a9ac2b78adc1a59e47ddfe2a92a81183f5727f1",
	        "Created": "2023-12-12T23:04:09.729028746Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.442872ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:10:17.111199    7576 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.47s)
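
Every status and node command in this section fails on the same probe: minikube shells out to docker to read the container state, and the daemon reports that the container no longer exists. A sketch of reproducing that probe by hand (illustrative only; the inspect command and container name are copied from the log above):

	# The state probe minikube runs via cli_runner.go
	docker container inspect multinode-499000 --format '{{.State.Status}}'
	# Expected here: exit status 1 with
	# "Error response from daemon: No such container: multinode-499000"
	docker ps -a --filter name=multinode-499000   # confirms no such container remains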

                                                
                                    
TestMultiNode/serial/StopMultiNode (13.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 stop: exit status 82 (12.681105589s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	* Stopping node "multinode-499000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-499000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-499000 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 status: exit status 7 (106.955646ms)

                                                
                                                
-- stdout --
	multinode-499000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:10:29.900122    7597 status.go:260] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	E1212 15:10:29.900135    7597 status.go:263] The "multinode-499000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr: exit status 7 (107.26064ms)

                                                
                                                
-- stdout --
	multinode-499000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:10:29.955817    7601 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:10:29.956114    7601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:29.956121    7601 out.go:309] Setting ErrFile to fd 2...
	I1212 15:10:29.956125    7601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:29.956309    7601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 15:10:29.956498    7601 out.go:303] Setting JSON to false
	I1212 15:10:29.956521    7601 mustload.go:65] Loading cluster: multinode-499000
	I1212 15:10:29.956555    7601 notify.go:220] Checking for updates...
	I1212 15:10:29.956806    7601 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:10:29.956819    7601 status.go:255] checking status of multinode-499000 ...
	I1212 15:10:29.957238    7601 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:30.007373    7601 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:30.007434    7601 status.go:330] multinode-499000 host status = "" (err=state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	)
	I1212 15:10:30.007454    7601 status.go:257] multinode-499000 status: &{Name:multinode-499000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1212 15:10:30.007470    7601 status.go:260] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	E1212 15:10:30.007477    7601 status.go:263] The "multinode-499000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr": multinode-499000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-499000 status --alsologtostderr": multinode-499000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "9a2028299e33934d544736714a9ac2b78adc1a59e47ddfe2a92a81183f5727f1",
	        "Created": "2023-12-12T23:04:09.729028746Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.051499ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:10:30.229123    7607 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (13.12s)
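
The stop failure follows the same pattern: six "Stopping node" attempts end in GUEST_STOP_TIMEOUT because there is no container to stop, while the docker inspect post-mortem shows the minikube-labelled network still registered with an empty Containers map. A small check that confirms this leftover state (an illustrative sketch; the label and network name come from the inspect output above):

	# List networks created by minikube for this profile
	docker network ls --filter label=name.minikube.sigs.k8s.io=multinode-499000
	# Show that nothing is attached to it (expected output: {})
	docker network inspect multinode-499000 --format '{{json .Containers}}'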

                                                
                                    
TestMultiNode/serial/RestartMultiNode (138.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-499000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1212 15:12:16.046354    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:12:42.783170    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-499000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m18.510941304s)

                                                
                                                
-- stdout --
	* [multinode-499000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-499000 in cluster multinode-499000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* docker "multinode-499000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 15:10:30.339757    7613 out.go:296] Setting OutFile to fd 1 ...
	I1212 15:10:30.340016    7613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:30.340022    7613 out.go:309] Setting ErrFile to fd 2...
	I1212 15:10:30.340026    7613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 15:10:30.340235    7613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 15:10:30.341780    7613 out.go:303] Setting JSON to false
	I1212 15:10:30.364520    7613 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4200,"bootTime":1702418430,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 15:10:30.364643    7613 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 15:10:30.386373    7613 out.go:177] * [multinode-499000] minikube v1.32.0 on Darwin 14.2
	I1212 15:10:30.450023    7613 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 15:10:30.428819    7613 notify.go:220] Checking for updates...
	I1212 15:10:30.471973    7613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 15:10:30.513926    7613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 15:10:30.534927    7613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 15:10:30.555949    7613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 15:10:30.576959    7613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 15:10:30.599610    7613 config.go:182] Loaded profile config "multinode-499000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 15:10:30.600296    7613 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 15:10:30.656180    7613 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 15:10:30.656323    7613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:10:30.757982    7613 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-12-12 23:10:30.747611275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:10:30.801522    7613 out.go:177] * Using the docker driver based on existing profile
	I1212 15:10:30.823607    7613 start.go:298] selected driver: docker
	I1212 15:10:30.823625    7613 start.go:902] validating driver "docker" against &{Name:multinode-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-499000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:10:30.823695    7613 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 15:10:30.823814    7613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 15:10:30.921786    7613 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-12-12 23:10:30.911728613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/
docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 15:10:30.924866    7613 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 15:10:30.924937    7613 cni.go:84] Creating CNI manager for ""
	I1212 15:10:30.924946    7613 cni.go:136] 1 nodes found, recommending kindnet
	I1212 15:10:30.924955    7613 start_flags.go:323] config:
	{Name:multinode-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 15:10:30.967734    7613 out.go:177] * Starting control plane node multinode-499000 in cluster multinode-499000
	I1212 15:10:30.989757    7613 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 15:10:31.032806    7613 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 15:10:31.054658    7613 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:10:31.054717    7613 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 15:10:31.054732    7613 cache.go:56] Caching tarball of preloaded images
	I1212 15:10:31.054727    7613 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 15:10:31.054900    7613 preload.go:174] Found /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 15:10:31.054916    7613 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 15:10:31.055035    7613 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/multinode-499000/config.json ...
	I1212 15:10:31.105204    7613 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 15:10:31.105227    7613 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 15:10:31.105257    7613 cache.go:194] Successfully downloaded all kic artifacts
	I1212 15:10:31.105316    7613 start.go:365] acquiring machines lock for multinode-499000: {Name:mk53f508a8f4d98fcd900a0ff67a1e257c9bcfa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 15:10:31.105405    7613 start.go:369] acquired machines lock for "multinode-499000" in 63.403µs
	I1212 15:10:31.105428    7613 start.go:96] Skipping create...Using existing machine configuration
	I1212 15:10:31.105439    7613 fix.go:54] fixHost starting: 
	I1212 15:10:31.105648    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:31.154933    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:31.154987    7613 fix.go:102] recreateIfNeeded on multinode-499000: state= err=unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:31.155016    7613 fix.go:107] machineExists: false. err=machine does not exist
	I1212 15:10:31.176712    7613 out.go:177] * docker "multinode-499000" container is missing, will recreate.
	I1212 15:10:31.220619    7613 delete.go:124] DEMOLISHING multinode-499000 ...
	I1212 15:10:31.220808    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:31.272562    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 15:10:31.272607    7613 stop.go:75] unable to get state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:31.272622    7613 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:31.272993    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:31.322392    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:31.322449    7613 delete.go:82] Unable to get host status for multinode-499000, assuming it has already been deleted: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:31.322525    7613 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 15:10:31.372567    7613 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 15:10:31.372603    7613 kic.go:371] could not find the container multinode-499000 to remove it. will try anyways
	I1212 15:10:31.372673    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:31.421938    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	W1212 15:10:31.421984    7613 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:31.422058    7613 cli_runner.go:164] Run: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0"
	W1212 15:10:31.472022    7613 cli_runner.go:211] docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 15:10:31.472053    7613 oci.go:650] error shutdown multinode-499000: docker exec --privileged -t multinode-499000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:32.473594    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:32.525289    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:32.525336    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:32.525347    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:32.525385    7613 retry.go:31] will retry after 439.181043ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:32.965371    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:33.019283    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:33.019331    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:33.019344    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:33.019370    7613 retry.go:31] will retry after 1.086280585s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:34.106412    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:34.160659    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:34.160711    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:34.160722    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:34.160755    7613 retry.go:31] will retry after 1.422906603s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:35.584618    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:35.640213    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:35.640257    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:35.640265    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:35.640290    7613 retry.go:31] will retry after 949.54805ms: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:36.591740    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:36.646109    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:36.646163    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:36.646177    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:36.646197    7613 retry.go:31] will retry after 1.593713649s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:38.240519    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:38.293141    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:38.293184    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:38.293197    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:38.293221    7613 retry.go:31] will retry after 3.403669179s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:41.699374    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:41.753473    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:41.753515    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:41.753524    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:41.753550    7613 retry.go:31] will retry after 3.906730549s: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:45.661614    7613 cli_runner.go:164] Run: docker container inspect multinode-499000 --format={{.State.Status}}
	W1212 15:10:45.713332    7613 cli_runner.go:211] docker container inspect multinode-499000 --format={{.State.Status}} returned with exit code 1
	I1212 15:10:45.713385    7613 oci.go:662] temporary error verifying shutdown: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	I1212 15:10:45.713395    7613 oci.go:664] temporary error: container multinode-499000 status is  but expect it to be exited
	I1212 15:10:45.713429    7613 oci.go:88] couldn't shut down multinode-499000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000
	 
	I1212 15:10:45.713495    7613 cli_runner.go:164] Run: docker rm -f -v multinode-499000
	I1212 15:10:45.764718    7613 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-499000
	W1212 15:10:45.814332    7613 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-499000 returned with exit code 1
	I1212 15:10:45.814447    7613 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:10:45.867238    7613 cli_runner.go:164] Run: docker network rm multinode-499000
	I1212 15:10:45.967640    7613 fix.go:114] Sleeping 1 second for extra luck!
	I1212 15:10:46.969841    7613 start.go:125] createHost starting for "" (driver="docker")
	I1212 15:10:46.993080    7613 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 15:10:46.993228    7613 start.go:159] libmachine.API.Create for "multinode-499000" (driver="docker")
	I1212 15:10:46.993294    7613 client.go:168] LocalClient.Create starting
	I1212 15:10:46.993479    7613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/ca.pem
	I1212 15:10:46.993532    7613 main.go:141] libmachine: Decoding PEM data...
	I1212 15:10:46.993552    7613 main.go:141] libmachine: Parsing certificate...
	I1212 15:10:46.993654    7613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17761-876/.minikube/certs/cert.pem
	I1212 15:10:46.993697    7613 main.go:141] libmachine: Decoding PEM data...
	I1212 15:10:46.993705    7613 main.go:141] libmachine: Parsing certificate...
	I1212 15:10:46.994162    7613 cli_runner.go:164] Run: docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 15:10:47.047625    7613 cli_runner.go:211] docker network inspect multinode-499000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 15:10:47.047727    7613 network_create.go:281] running [docker network inspect multinode-499000] to gather additional debugging logs...
	I1212 15:10:47.047743    7613 cli_runner.go:164] Run: docker network inspect multinode-499000
	W1212 15:10:47.098707    7613 cli_runner.go:211] docker network inspect multinode-499000 returned with exit code 1
	I1212 15:10:47.098738    7613 network_create.go:284] error running [docker network inspect multinode-499000]: docker network inspect multinode-499000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-499000 not found
	I1212 15:10:47.098749    7613 network_create.go:286] output of [docker network inspect multinode-499000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-499000 not found
	
	** /stderr **
	I1212 15:10:47.098863    7613 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 15:10:47.150256    7613 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:10:47.150649    7613 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020d5f40}
	I1212 15:10:47.150668    7613 network_create.go:124] attempt to create docker network multinode-499000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1212 15:10:47.150743    7613 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	W1212 15:10:47.200870    7613 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000 returned with exit code 1
	W1212 15:10:47.200917    7613 network_create.go:149] failed to create docker network multinode-499000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1212 15:10:47.200938    7613 network_create.go:116] failed to create docker network multinode-499000 192.168.58.0/24, will retry: subnet is taken
	I1212 15:10:47.202580    7613 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1212 15:10:47.202951    7613 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025253d0}
	I1212 15:10:47.202972    7613 network_create.go:124] attempt to create docker network multinode-499000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1212 15:10:47.203035    7613 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-499000 multinode-499000
	I1212 15:10:47.288696    7613 network_create.go:108] docker network multinode-499000 192.168.67.0/24 created
	I1212 15:10:47.288740    7613 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-499000" container
	I1212 15:10:47.288848    7613 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 15:10:47.339975    7613 cli_runner.go:164] Run: docker volume create multinode-499000 --label name.minikube.sigs.k8s.io=multinode-499000 --label created_by.minikube.sigs.k8s.io=true
	I1212 15:10:47.390041    7613 oci.go:103] Successfully created a docker volume multinode-499000
	I1212 15:10:47.390146    7613 cli_runner.go:164] Run: docker run --rm --name multinode-499000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-499000 --entrypoint /usr/bin/test -v multinode-499000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 15:10:47.680287    7613 oci.go:107] Successfully prepared a docker volume multinode-499000
	I1212 15:10:47.680322    7613 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 15:10:47.680336    7613 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 15:10:47.680437    7613 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-499000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-499000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-499000
helpers_test.go:235: (dbg) docker inspect multinode-499000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-499000",
	        "Id": "ffac403ccee8feedd8ba48a984cf7041ac2dd6708996740f9f46dc4c2ce41a3a",
	        "Created": "2023-12-12T23:10:47.249541064Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-499000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-499000 -n multinode-499000: exit status 7 (107.87746ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:12:48.969412    7730 status.go:249] status error: host: state: unknown state "multinode-499000": docker container inspect multinode-499000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-499000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-499000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (138.74s)
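
Note on the subnet fallback seen in the log above: the first `docker network create` on 192.168.58.0/24 was rejected with "Pool overlaps with other one on this address space", and the retry on 192.168.67.0/24 succeeded. Below is a minimal, illustrative Go sketch of that probe-and-retry pattern, assuming only that the docker CLI is on PATH; the network name "probe-net", the candidate list, and tryCreateNetwork are hypothetical helpers, not minikube's own code (the real call also adds the created_by/name labels shown above).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tryCreateNetwork attempts to create a bridge network on the given subnet,
// using the driver/gateway/MTU flags that appear in the log above (the
// minikube-specific labels are omitted for brevity).
func tryCreateNetwork(name, subnet, gateway string) error {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "com.docker.network.driver.mtu=65535",
		name)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%s: %w", strings.TrimSpace(string(out)), err)
	}
	return nil
}

func main() {
	// Candidate private subnets, in the order the log above walks them.
	candidates := []struct{ subnet, gateway string }{
		{"192.168.58.0/24", "192.168.58.1"},
		{"192.168.67.0/24", "192.168.67.1"},
	}
	for _, c := range candidates {
		if err := tryCreateNetwork("probe-net", c.subnet, c.gateway); err != nil {
			// "Pool overlaps with other one on this address space" means the
			// subnet is already in use; fall through to the next candidate.
			fmt.Println("skipping", c.subnet+":", err)
			continue
		}
		fmt.Println("created probe-net on", c.subnet)
		return
	}
	fmt.Println("no free subnet among the candidates")
}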

                                                
                                    
TestScheduledStopUnix (300.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-807000 --memory=2048 --driver=docker 
E1212 15:17:16.055088    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:17:42.791323    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:18:39.111876    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-807000 --memory=2048 --driver=docker : signal: killed (5m0.005096288s)

                                                
                                                
-- stdout --
	* [scheduled-stop-807000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-807000 in cluster scheduled-stop-807000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-807000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-807000 in cluster scheduled-stop-807000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-12-12 15:20:30.774317 -0800 PST m=+4676.980511967
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-807000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-807000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-807000",
	        "Id": "356c16f72836d28940de3d2961698374d1deca083808a4d6fee1cd3c59dadb3a",
	        "Created": "2023-12-12T23:15:32.071745876Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-807000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-807000 -n scheduled-stop-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-807000 -n scheduled-stop-807000: exit status 7 (109.352151ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:20:30.939103    8249 status.go:249] status error: host: state: unknown state "scheduled-stop-807000": docker container inspect scheduled-stop-807000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-807000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-807000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-807000
--- FAIL: TestScheduledStopUnix (300.89s)
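
Note on the recurring "unknown state ... No such container" errors in the failures above: they all come from the same probe, `docker container inspect <name> --format={{.State.Status}}`, which exits with status 1 when the container was never created. A minimal, illustrative Go sketch of that probe follows, assuming the docker CLI is available; containerState is a hypothetical helper, not the cli_runner code the test binary uses.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the inspect probe that recurs throughout the log:
// it returns the container's State.Status, or an error carrying the daemon's
// stderr ("No such container: ...") when the container does not exist.
func containerState(name string) (string, error) {
	cmd := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}")
	out, err := cmd.Output()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return "", fmt.Errorf("unknown state %q: %s", name,
				strings.TrimSpace(string(ee.Stderr)))
		}
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("scheduled-stop-807000")
	if err != nil {
		// This is the case the status command above reports as "Nonexistent":
		// the container was never created before the test was killed.
		fmt.Println("Nonexistent:", err)
		return
	}
	fmt.Println("state:", state)
}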

                                                
                                    
TestSkaffold (300.89s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1349809969 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-257000 --memory=2600 --driver=docker 
E1212 15:22:15.930291    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:22:42.663394    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
E1212 15:24:05.711479    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-257000 --memory=2600 --driver=docker : signal: killed (4m57.380745543s)

                                                
                                                
-- stdout --
	* [skaffold-257000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-257000 in cluster skaffold-257000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-257000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-257000 in cluster skaffold-257000
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2023-12-12 15:25:31.645994 -0800 PST m=+4977.870246203
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-257000
helpers_test.go:235: (dbg) docker inspect skaffold-257000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-257000",
	        "Id": "ec300543422bfa8020759b997764f307e4cb4ef0765df756d0c63dc20f382a9f",
	        "Created": "2023-12-12T23:20:35.370851912Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-257000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-257000 -n skaffold-257000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-257000 -n skaffold-257000: exit status 7 (107.781965ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 15:25:31.806070    8386 status.go:249] status error: host: state: unknown state "skaffold-257000": docker container inspect skaffold-257000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-257000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-257000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-257000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-257000
--- FAIL: TestSkaffold (300.89s)

                                                
                                    
TestInsufficientStorage (300.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-023000 --memory=2048 --output=json --wait=true --driver=docker 
E1212 15:27:15.912133    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 15:27:42.645618    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-023000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003403579s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d5e06ee8-551a-4056-b000-2ebca6c4b4ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-023000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac52853e-1575-46cb-99fa-11baa254df81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17761"}}
	{"specversion":"1.0","id":"a85c035c-c627-4619-8e7a-a59e83169f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig"}}
	{"specversion":"1.0","id":"6354e0eb-ae49-44fa-8958-e251ebcb9119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"888ccb03-e870-4a75-bc2f-e85ef003faec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2e66fd0c-3461-4e86-b23a-3701544806e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube"}}
	{"specversion":"1.0","id":"9353cb1b-ecfc-4c93-a397-00e2038bb4f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c8ebcdee-393c-4b23-85cc-f75722dd76ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"33651db3-419e-41b1-a0d8-623562e8a38c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a079af05-7218-4e00-862f-7d3c2c27c760","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b6921eb-2cae-4904-98a6-f00cf37461f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"df3a69da-086c-46a4-8a0a-8b74b71ab5a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-023000 in cluster insufficient-storage-023000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d14b9b0d-68fd-45cf-98cc-b3f86989ea3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702394725-17761 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"02c9e7b6-178b-4818-8089-512b95a4da15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-023000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-023000 --output=json --layout=cluster: context deadline exceeded (1.278µs)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-023000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-023000
--- FAIL: TestInsufficientStorage (300.73s)
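
Note on the JSON output above: with --output=json, each progress line is a CloudEvents-style object with the step details under "data". Below is a minimal, illustrative Go sketch of decoding such lines, assuming the field names exactly as they appear in the log; the event struct and the profile/file names in the usage comment are hypothetical, not minikube's own definitions.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe a JSON-output start into this program, e.g.
	//   minikube start -p demo --output=json | go run decode_events.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("step %s/%s: %s\n",
				e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}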

                                                
                                    

Test pass (142/189)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.11
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.28.4/json-events 35.11
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.29
17 TestDownloadOnly/v1.29.0-rc.2/json-events 67.08
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.34
23 TestDownloadOnly/DeleteAll 0.65
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.37
25 TestDownloadOnlyKic 2.06
26 TestBinaryMirror 1.6
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
32 TestAddons/Setup 157.61
36 TestAddons/parallel/InspektorGadget 10.91
37 TestAddons/parallel/MetricsServer 5.8
38 TestAddons/parallel/HelmTiller 10.12
40 TestAddons/parallel/CSI 76.41
41 TestAddons/parallel/Headlamp 15.56
42 TestAddons/parallel/CloudSpanner 5.87
43 TestAddons/parallel/LocalPath 56.01
44 TestAddons/parallel/NvidiaDevicePlugin 5.71
47 TestAddons/serial/GCPAuth/Namespaces 0.1
48 TestAddons/StoppedEnableDisable 11.81
56 TestHyperKitDriverInstallOrUpdate 7
59 TestErrorSpam/setup 20.72
60 TestErrorSpam/start 2.08
61 TestErrorSpam/status 1.18
62 TestErrorSpam/pause 1.63
63 TestErrorSpam/unpause 1.81
64 TestErrorSpam/stop 11.53
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 38.45
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 39.03
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.3
76 TestFunctional/serial/CacheCmd/cache/add_local 1.68
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
81 TestFunctional/serial/CacheCmd/cache/delete 0.16
82 TestFunctional/serial/MinikubeKubectlCmd 0.56
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.8
84 TestFunctional/serial/ExtraConfig 38.37
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 3.08
87 TestFunctional/serial/LogsFileCmd 3.15
88 TestFunctional/serial/InvalidService 4.27
90 TestFunctional/parallel/ConfigCmd 0.64
91 TestFunctional/parallel/DashboardCmd 12.51
92 TestFunctional/parallel/DryRun 1.72
93 TestFunctional/parallel/InternationalLanguage 0.91
94 TestFunctional/parallel/StatusCmd 1.28
99 TestFunctional/parallel/AddonsCmd 0.25
100 TestFunctional/parallel/PersistentVolumeClaim 27.74
102 TestFunctional/parallel/SSHCmd 0.91
103 TestFunctional/parallel/CpCmd 2.5
104 TestFunctional/parallel/MySQL 35.84
105 TestFunctional/parallel/FileSync 0.45
106 TestFunctional/parallel/CertSync 2.69
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
114 TestFunctional/parallel/License 0.59
115 TestFunctional/parallel/Version/short 0.11
116 TestFunctional/parallel/Version/components 0.89
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.41
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.05
122 TestFunctional/parallel/ImageCommands/Setup 3.09
123 TestFunctional/parallel/DockerEnv/bash 2.06
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.26
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.6
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.24
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.88
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.7
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.7
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.29
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
145 TestFunctional/parallel/ServiceCmd/DeployApp 13.18
146 TestFunctional/parallel/ServiceCmd/List 0.61
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
148 TestFunctional/parallel/ServiceCmd/HTTPS 15
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
150 TestFunctional/parallel/ProfileCmd/profile_list 0.47
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
153 TestFunctional/parallel/ServiceCmd/Format 15
155 TestFunctional/parallel/ServiceCmd/URL 15
156 TestFunctional/parallel/MountCmd/VerifyCleanup 2.45
157 TestFunctional/delete_addon-resizer_images 0.13
158 TestFunctional/delete_my-image_image 0.05
159 TestFunctional/delete_minikube_cached_images 0.05
163 TestImageBuild/serial/Setup 21.49
164 TestImageBuild/serial/NormalBuild 1.99
165 TestImageBuild/serial/BuildWithBuildArg 0.95
166 TestImageBuild/serial/BuildWithDockerIgnore 0.77
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.76
177 TestJSONOutput/start/Command 36.96
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.58
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.58
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 5.75
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.9
202 TestKicCustomNetwork/create_custom_network 23.68
203 TestKicCustomNetwork/use_default_bridge_network 22.73
204 TestKicExistingNetwork 23.94
205 TestKicCustomSubnet 23.56
206 TestKicStaticIP 24.2
207 TestMainNoArgs 0.08
208 TestMinikubeProfile 49.82
211 TestMountStart/serial/StartWithMountFirst 7.32
231 TestPreload 161.02
252 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 10.06
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.52
TestDownloadOnly/v1.16.0/json-events (16.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-637000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-637000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (16.109510413s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.11s)
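Each json-events subtest simply invokes the built minikube binary with --download-only and records how long the run takes, which is what the "(dbg) Run" / "(dbg) Done" lines above reflect. Below is a minimal sketch of the same invocation; it is assumed for illustration and is not the actual test helper, though the binary path and flags are copied from the command in the log.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "start", "-o=json", "--download-only", "-p", "download-only-637000",
            "--force", "--alsologtostderr",
            "--kubernetes-version=v1.16.0",
            "--container-runtime=docker", "--driver=docker",
        }
        cmd := exec.Command("out/minikube-darwin-amd64", args...)

        start := time.Now()
        out, err := cmd.CombinedOutput()
        if err != nil {
            // A failure here is what the harness reports as a "Non-zero exit".
            log.Fatalf("non-zero exit: %v\n%s", err, out)
        }
        fmt.Printf("Done: minikube start --download-only (%s)\n", time.Since(start))
    }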

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-637000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-637000: exit status 85 (291.665935ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-637000 | jenkins | v1.32.0 | 12 Dec 23 14:02 PST |          |
	|         | -p download-only-637000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 14:02:33
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 14:02:33.781037    1338 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:02:33.781337    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:02:33.781342    1338 out.go:309] Setting ErrFile to fd 2...
	I1212 14:02:33.781346    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:02:33.781531    1338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	W1212 14:02:33.781626    1338 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17761-876/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17761-876/.minikube/config/config.json: no such file or directory
	I1212 14:02:33.783352    1338 out.go:303] Setting JSON to true
	I1212 14:02:33.808206    1338 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":123,"bootTime":1702418430,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:02:33.808295    1338 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:02:33.830574    1338 out.go:97] [download-only-637000] minikube v1.32.0 on Darwin 14.2
	I1212 14:02:33.852236    1338 out.go:169] MINIKUBE_LOCATION=17761
	I1212 14:02:33.830799    1338 notify.go:220] Checking for updates...
	W1212 14:02:33.830828    1338 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 14:02:33.896365    1338 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:02:33.918485    1338 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:02:33.939500    1338 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:02:33.961348    1338 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	W1212 14:02:34.003468    1338 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 14:02:34.003952    1338 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 14:02:34.063802    1338 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:02:34.063932    1338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:02:34.166056    1338 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:false NGoroutines:51 SystemTime:2023-12-12 22:02:34.156405591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:02:34.187379    1338 out.go:97] Using the docker driver based on user configuration
	I1212 14:02:34.187416    1338 start.go:298] selected driver: docker
	I1212 14:02:34.187427    1338 start.go:902] validating driver "docker" against <nil>
	I1212 14:02:34.187613    1338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:02:34.293665    1338 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:false NGoroutines:51 SystemTime:2023-12-12 22:02:34.283423286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:02:34.293854    1338 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 14:02:34.298836    1338 start_flags.go:394] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I1212 14:02:34.299012    1338 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 14:02:34.320413    1338 out.go:169] Using Docker Desktop driver with root privileges
	I1212 14:02:34.341577    1338 cni.go:84] Creating CNI manager for ""
	I1212 14:02:34.341617    1338 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1212 14:02:34.341638    1338 start_flags.go:323] config:
	{Name:download-only-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-637000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:02:34.363126    1338 out.go:97] Starting control plane node download-only-637000 in cluster download-only-637000
	I1212 14:02:34.363153    1338 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 14:02:34.384158    1338 out.go:97] Pulling base image v0.0.42-1702394725-17761 ...
	I1212 14:02:34.384227    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 14:02:34.384315    1338 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 14:02:34.436265    1338 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 14:02:34.436343    1338 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 14:02:34.436359    1338 cache.go:56] Caching tarball of preloaded images
	I1212 14:02:34.436486    1338 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 14:02:34.436506    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 14:02:34.436631    1338 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 14:02:34.457328    1338 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 14:02:34.457346    1338 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:02:34.534276    1338 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 14:02:39.807425    1338 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:02:39.807600    1338 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:02:40.353242    1338 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1212 14:02:40.353478    1338 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/download-only-637000/config.json ...
	I1212 14:02:40.353501    1338 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/download-only-637000/config.json: {Name:mkd29b15c8d968578a4efafb91ac45d9e58e6626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 14:02:40.353789    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 14:02:40.354057    1338 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17761-876/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-637000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
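The Last Start log above shows the preload tarball being fetched with an md5 value in the checksum query parameter and then saved and verified on disk (preload.go:238/249/256). Below is a minimal sketch of that kind of verification; it is assumed for illustration and is not minikube's actual implementation, though the file name and digest are taken from the download line above.

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it to the expected digest.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // Path and expected digest are the ones shown in the download line above.
        if err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
            "326f3ce331abb64565b50b8c9e791244"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("preload checksum verified")
    }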

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (35.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-637000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-637000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (35.107263824s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (35.11s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-637000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-637000: exit status 85 (290.133592ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-637000 | jenkins | v1.32.0 | 12 Dec 23 14:02 PST |          |
	|         | -p download-only-637000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-637000 | jenkins | v1.32.0 | 12 Dec 23 14:02 PST |          |
	|         | -p download-only-637000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 14:02:50
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 14:02:50.187049    1400 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:02:50.187244    1400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:02:50.187250    1400 out.go:309] Setting ErrFile to fd 2...
	I1212 14:02:50.187259    1400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:02:50.187440    1400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	W1212 14:02:50.187533    1400 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17761-876/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17761-876/.minikube/config/config.json: no such file or directory
	I1212 14:02:50.188826    1400 out.go:303] Setting JSON to true
	I1212 14:02:50.211192    1400 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":140,"bootTime":1702418430,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:02:50.211301    1400 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:02:50.232931    1400 out.go:97] [download-only-637000] minikube v1.32.0 on Darwin 14.2
	I1212 14:02:50.254034    1400 out.go:169] MINIKUBE_LOCATION=17761
	I1212 14:02:50.233063    1400 notify.go:220] Checking for updates...
	I1212 14:02:50.296812    1400 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:02:50.339974    1400 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:02:50.381802    1400 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:02:50.402908    1400 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	W1212 14:02:50.445001    1400 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 14:02:50.445659    1400 config.go:182] Loaded profile config "download-only-637000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1212 14:02:50.445723    1400 start.go:810] api.Load failed for download-only-637000: filestore "download-only-637000": Docker machine "download-only-637000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:02:50.445838    1400 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 14:02:50.445865    1400 start.go:810] api.Load failed for download-only-637000: filestore "download-only-637000": Docker machine "download-only-637000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:02:50.510393    1400 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:02:50.510545    1400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:02:50.611924    1400 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:false NGoroutines:53 SystemTime:2023-12-12 22:02:50.601659089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:02:50.632945    1400 out.go:97] Using the docker driver based on existing profile
	I1212 14:02:50.632983    1400 start.go:298] selected driver: docker
	I1212 14:02:50.632995    1400 start.go:902] validating driver "docker" against &{Name:download-only-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-637000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:02:50.633286    1400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:02:50.735801    1400 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:false NGoroutines:53 SystemTime:2023-12-12 22:02:50.726710085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:02:50.738911    1400 cni.go:84] Creating CNI manager for ""
	I1212 14:02:50.738936    1400 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 14:02:50.738947    1400 start_flags.go:323] config:
	{Name:download-only-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-637000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:02:50.760497    1400 out.go:97] Starting control plane node download-only-637000 in cluster download-only-637000
	I1212 14:02:50.760518    1400 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 14:02:50.781768    1400 out.go:97] Pulling base image v0.0.42-1702394725-17761 ...
	I1212 14:02:50.781875    1400 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:02:50.781952    1400 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 14:02:50.832834    1400 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 14:02:50.832993    1400 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 14:02:50.833014    1400 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory, skipping pull
	I1212 14:02:50.833021    1400 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in cache, skipping pull
	I1212 14:02:50.833030    1400 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 as a tarball
	I1212 14:02:50.834562    1400 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 14:02:50.834575    1400 cache.go:56] Caching tarball of preloaded images
	I1212 14:02:50.834746    1400 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:02:50.855908    1400 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 14:02:50.855937    1400 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:02:50.941240    1400 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 14:02:56.113869    1400 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:02:56.114067    1400 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:02:56.736464    1400 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 14:02:56.736543    1400 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/download-only-637000/config.json ...
	I1212 14:02:56.736879    1400 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 14:02:56.737101    1400 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17761-876/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-637000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (67.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-637000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-637000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (1m7.081615373s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (67.08s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-637000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-637000: exit status 85 (341.679563ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-637000 | jenkins | v1.32.0 | 12 Dec 23 14:02 PST |          |
	|         | -p download-only-637000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-637000 | jenkins | v1.32.0 | 12 Dec 23 14:02 PST |          |
	|         | -p download-only-637000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-637000 | jenkins | v1.32.0 | 12 Dec 23 14:03 PST |          |
	|         | -p download-only-637000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 14:03:25
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 14:03:25.585832    1450 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:03:25.586048    1450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:03:25.586053    1450 out.go:309] Setting ErrFile to fd 2...
	I1212 14:03:25.586057    1450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:03:25.586234    1450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	W1212 14:03:25.586338    1450 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17761-876/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17761-876/.minikube/config/config.json: no such file or directory
	I1212 14:03:25.587567    1450 out.go:303] Setting JSON to true
	I1212 14:03:25.609510    1450 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":175,"bootTime":1702418430,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:03:25.609608    1450 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:03:25.630951    1450 out.go:97] [download-only-637000] minikube v1.32.0 on Darwin 14.2
	I1212 14:03:25.652700    1450 out.go:169] MINIKUBE_LOCATION=17761
	I1212 14:03:25.631121    1450 notify.go:220] Checking for updates...
	I1212 14:03:25.695667    1450 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:03:25.717640    1450 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:03:25.738807    1450 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:03:25.759883    1450 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	W1212 14:03:25.802577    1450 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 14:03:25.802983    1450 config.go:182] Loaded profile config "download-only-637000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1212 14:03:25.803032    1450 start.go:810] api.Load failed for download-only-637000: filestore "download-only-637000": Docker machine "download-only-637000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:03:25.803114    1450 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 14:03:25.803135    1450 start.go:810] api.Load failed for download-only-637000: filestore "download-only-637000": Docker machine "download-only-637000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 14:03:25.861783    1450 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:03:25.861913    1450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:03:25.963676    1450 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:false NGoroutines:53 SystemTime:2023-12-12 22:03:25.953870503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:03:25.984734    1450 out.go:97] Using the docker driver based on existing profile
	I1212 14:03:25.984812    1450 start.go:298] selected driver: docker
	I1212 14:03:25.984824    1450 start.go:902] validating driver "docker" against &{Name:download-only-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-637000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:03:25.985101    1450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:03:26.085846    1450 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:false NGoroutines:53 SystemTime:2023-12-12 22:03:26.076755341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:03:26.088955    1450 cni.go:84] Creating CNI manager for ""
	I1212 14:03:26.088985    1450 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 14:03:26.088996    1450 start_flags.go:323] config:
	{Name:download-only-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-637000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs
:}
	I1212 14:03:26.109874    1450 out.go:97] Starting control plane node download-only-637000 in cluster download-only-637000
	I1212 14:03:26.109905    1450 cache.go:121] Beginning downloading kic base image for docker with docker
	I1212 14:03:26.131100    1450 out.go:97] Pulling base image v0.0.42-1702394725-17761 ...
	I1212 14:03:26.131180    1450 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 14:03:26.131273    1450 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 14:03:26.182347    1450 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 14:03:26.182604    1450 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 14:03:26.182622    1450 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory, skipping pull
	I1212 14:03:26.182628    1450 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in cache, skipping pull
	I1212 14:03:26.182637    1450 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 as a tarball
	I1212 14:03:26.183200    1450 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 14:03:26.183213    1450 cache.go:56] Caching tarball of preloaded images
	I1212 14:03:26.183369    1450 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 14:03:26.204932    1450 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 14:03:26.204960    1450 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:03:26.277655    1450 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 14:03:57.400130    1450 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:03:57.400302    1450 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17761-876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 14:03:57.938115    1450 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1212 14:03:57.938195    1450 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/download-only-637000/config.json ...
	I1212 14:03:57.938560    1450 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 14:03:57.938789    1450 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17761-876/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-637000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.34s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.65s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-637000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnlyKic (2.06s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-644000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-644000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-644000
--- PASS: TestDownloadOnlyKic (2.06s)

                                                
                                    
TestBinaryMirror (1.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-891000 --alsologtostderr --binary-mirror http://127.0.0.1:49346 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-891000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-891000
--- PASS: TestBinaryMirror (1.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-631000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-631000: exit status 85 (209.210593ms)

                                                
                                                
-- stdout --
	* Profile "addons-631000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-631000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-631000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-631000: exit status 85 (189.325176ms)

                                                
                                                
-- stdout --
	* Profile "addons-631000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-631000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
TestAddons/Setup (157.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-631000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-631000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m37.606105227s)
--- PASS: TestAddons/Setup (157.61s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-snkf9" [b532351d-70cd-4f74-8e16-9b8b8d712097] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010452351s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-631000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-631000: (5.897101106s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 5.331506ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-cphkd" [c2992afb-9d99-4adf-8bf8-ade449751dc4] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017769038s
addons_test.go:414: (dbg) Run:  kubectl --context addons-631000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-631000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.12s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.82797ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-kwzmm" [d99ff095-7f9d-4440-9977-2436a32adbbf] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010496384s
addons_test.go:472: (dbg) Run:  kubectl --context addons-631000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-631000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.326596878s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-631000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.12s)

                                                
                                    
TestAddons/parallel/CSI (76.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 13.619054ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-631000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-631000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ef078af3-e03b-485e-b3cf-b0c98cd2cec9] Pending
helpers_test.go:344: "task-pv-pod" [ef078af3-e03b-485e-b3cf-b0c98cd2cec9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ef078af3-e03b-485e-b3cf-b0c98cd2cec9] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.01192372s
addons_test.go:583: (dbg) Run:  kubectl --context addons-631000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-631000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-631000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-631000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-631000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-631000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-631000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-631000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0ca2ab7-10b8-4500-8063-19f596ea2638] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0ca2ab7-10b8-4500-8063-19f596ea2638] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c0ca2ab7-10b8-4500-8063-19f596ea2638] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.012686335s
addons_test.go:625: (dbg) Run:  kubectl --context addons-631000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-631000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-631000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-631000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-631000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.857247633s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-631000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-darwin-amd64 -p addons-631000 addons disable volumesnapshots --alsologtostderr -v=1: (1.038220713s)
--- PASS: TestAddons/parallel/CSI (76.41s)

                                                
                                    
TestAddons/parallel/Headlamp (15.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-631000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-631000 --alsologtostderr -v=1: (1.543590143s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-vzvqv" [2135b45b-114a-4fdc-83e6-1d73e26685b3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-vzvqv" [2135b45b-114a-4fdc-83e6-1d73e26685b3] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.010902429s
--- PASS: TestAddons/parallel/Headlamp (15.56s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.87s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-9tc7t" [015e2f7b-b614-43c4-b160-d602b45a6e61] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010067458s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-631000
--- PASS: TestAddons/parallel/CloudSpanner (5.87s)

                                                
                                    
TestAddons/parallel/LocalPath (56.01s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-631000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-631000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5ffc37fc-7a72-4928-afb4-fb5e0ff15702] Pending
helpers_test.go:344: "test-local-path" [5ffc37fc-7a72-4928-afb4-fb5e0ff15702] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5ffc37fc-7a72-4928-afb4-fb5e0ff15702] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5ffc37fc-7a72-4928-afb4-fb5e0ff15702] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.044163614s
addons_test.go:890: (dbg) Run:  kubectl --context addons-631000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-631000 ssh "cat /opt/local-path-provisioner/pvc-d670a280-d5d5-4a9f-bcdb-496cfb2086b3_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-631000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-631000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-631000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-amd64 -p addons-631000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.720680531s)
--- PASS: TestAddons/parallel/LocalPath (56.01s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h975x" [9d802fe6-bc0f-4f9e-a760-6450a992d353] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.012059679s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-631000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-631000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-631000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.81s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-631000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-631000: (11.091081584s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-631000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-631000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-631000
--- PASS: TestAddons/StoppedEnableDisable (11.81s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.00s)

                                                
                                    
TestErrorSpam/setup (20.72s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-737000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-737000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 --driver=docker : (20.715622447s)
--- PASS: TestErrorSpam/setup (20.72s)

                                                
                                    
TestErrorSpam/start (2.08s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 start --dry-run
--- PASS: TestErrorSpam/start (2.08s)

                                                
                                    
TestErrorSpam/status (1.18s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 status
--- PASS: TestErrorSpam/status (1.18s)

                                                
                                    
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (11.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 stop: (10.891792353s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-737000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-737000 stop
--- PASS: TestErrorSpam/stop (11.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17761-876/.minikube/files/etc/test/nested/copy/1336/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (38.45s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-386000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-386000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (38.44596996s)
--- PASS: TestFunctional/serial/StartWithProxy (38.45s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-386000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-386000 --alsologtostderr -v=8: (39.026797013s)
functional_test.go:659: soft start took 39.027343296s for "functional-386000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.03s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-386000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 cache add registry.k8s.io/pause:3.1: (1.51086255s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 cache add registry.k8s.io/pause:3.3: (1.453083472s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 cache add registry.k8s.io/pause:latest: (1.331628609s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local294622525/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cache add minikube-local-cache-test:functional-386000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 cache add minikube-local-cache-test:functional-386000: (1.055017442s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cache delete minikube-local-cache-test:functional-386000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-386000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (382.594466ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 kubectl -- --context functional-386000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.8s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-386000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.80s)

TestFunctional/serial/ExtraConfig (38.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-386000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 14:12:15.829155    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:15.835149    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:15.845310    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:15.867002    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:15.907718    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:15.989114    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:16.149690    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:16.471455    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:17.112726    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:18.393231    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
E1212 14:12:20.953855    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-386000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.370342648s)
functional_test.go:757: restart took 38.370474186s for "functional-386000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.37s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-386000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 logs
E1212 14:12:26.074143    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 logs: (3.084562855s)
--- PASS: TestFunctional/serial/LogsCmd (3.08s)

TestFunctional/serial/LogsFileCmd (3.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd3715796447/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd3715796447/001/logs.txt: (3.145679467s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.15s)

TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-386000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-386000: exit status 115 (544.770957ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31744 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-386000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

TestFunctional/parallel/ConfigCmd (0.64s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 config get cpus: exit status 14 (55.958717ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 config get cpus
E1212 14:12:36.314836    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 config get cpus: exit status 14 (80.789598ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.64s)

TestFunctional/parallel/DashboardCmd (12.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-386000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-386000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4123: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.51s)

TestFunctional/parallel/DryRun (1.72s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (877.523904ms)

                                                
                                                
-- stdout --
	* [functional-386000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:14:18.340155    4049 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:14:18.340511    4049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:14:18.340519    4049 out.go:309] Setting ErrFile to fd 2...
	I1212 14:14:18.340524    4049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:14:18.340790    4049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:14:18.342859    4049 out.go:303] Setting JSON to false
	I1212 14:14:18.383138    4049 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":828,"bootTime":1702418430,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:14:18.383269    4049 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:14:18.406061    4049 out.go:177] * [functional-386000] minikube v1.32.0 on Darwin 14.2
	I1212 14:14:18.470008    4049 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 14:14:18.448227    4049 notify.go:220] Checking for updates...
	I1212 14:14:18.512055    4049 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:14:18.578097    4049 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:14:18.630923    4049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:14:18.673890    4049 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 14:14:18.778739    4049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 14:14:18.817220    4049 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 14:14:18.817679    4049 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 14:14:18.875820    4049 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:14:18.875965    4049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:14:18.995527    4049 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:59 SystemTime:2023-12-12 22:14:18.984489735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:14:19.017244    4049 out.go:177] * Using the docker driver based on existing profile
	I1212 14:14:19.038029    4049 start.go:298] selected driver: docker
	I1212 14:14:19.038047    4049 start.go:902] validating driver "docker" against &{Name:functional-386000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-386000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:14:19.038150    4049 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 14:14:19.062796    4049 out.go:177] 
	W1212 14:14:19.084888    4049 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 14:14:19.105866    4049 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-386000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.72s)

TestFunctional/parallel/InternationalLanguage (0.91s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (911.939519ms)

                                                
                                                
-- stdout --
	* [functional-386000] minikube v1.32.0 sur Darwin 14.2
	  - MINIKUBE_LOCATION=17761
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 14:14:17.920502    4037 out.go:296] Setting OutFile to fd 1 ...
	I1212 14:14:17.920819    4037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:14:17.920826    4037 out.go:309] Setting ErrFile to fd 2...
	I1212 14:14:17.920830    4037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 14:14:17.921043    4037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
	I1212 14:14:17.922804    4037 out.go:303] Setting JSON to false
	I1212 14:14:17.949409    4037 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":827,"bootTime":1702418430,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1212 14:14:17.949526    4037 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 14:14:17.971475    4037 out.go:177] * [functional-386000] minikube v1.32.0 sur Darwin 14.2
	I1212 14:14:18.035111    4037 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 14:14:18.014085    4037 notify.go:220] Checking for updates...
	I1212 14:14:18.076820    4037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
	I1212 14:14:18.098039    4037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 14:14:18.118973    4037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 14:14:18.161049    4037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube
	I1212 14:14:18.202907    4037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 14:14:18.224563    4037 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 14:14:18.225091    4037 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 14:14:18.294412    4037 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1212 14:14:18.294573    4037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 14:14:18.493033    4037 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:59 SystemTime:2023-12-12 22:14:18.449830463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221279232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1212 14:14:18.554123    4037 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1212 14:14:18.578121    4037 start.go:298] selected driver: docker
	I1212 14:14:18.578149    4037 start.go:902] validating driver "docker" against &{Name:functional-386000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-386000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 14:14:18.578304    4037 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 14:14:18.630935    4037 out.go:177] 
	W1212 14:14:18.652732    4037 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 14:14:18.695948    4037 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.91s)

TestFunctional/parallel/StatusCmd (1.28s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

TestFunctional/parallel/AddonsCmd (0.25s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

TestFunctional/parallel/PersistentVolumeClaim (27.74s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [11dfcbe5-6d3a-427a-b50f-2c93fa27c937] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01463802s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-386000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-386000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a05af8d9-0c0f-4d34-a628-37013fd6d6dd] Pending
helpers_test.go:344: "sp-pod" [a05af8d9-0c0f-4d34-a628-37013fd6d6dd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a05af8d9-0c0f-4d34-a628-37013fd6d6dd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.013206369s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-386000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-386000 delete -f testdata/storage-provisioner/pod.yaml
E1212 14:13:37.756198    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ec13a294-f0e9-4e20-98c0-3a10fb0a4d5c] Pending
helpers_test.go:344: "sp-pod" [ec13a294-f0e9-4e20-98c0-3a10fb0a4d5c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ec13a294-f0e9-4e20-98c0-3a10fb0a4d5c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011832348s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-386000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.74s)

TestFunctional/parallel/SSHCmd (0.91s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.91s)

TestFunctional/parallel/CpCmd (2.5s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh -n functional-386000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cp functional-386000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2635642309/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh -n functional-386000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh -n functional-386000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.50s)

TestFunctional/parallel/MySQL (35.84s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-386000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-kjlb9" [5c010505-ac01-428c-ae7e-4f5b2fa84540] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-kjlb9" [5c010505-ac01-428c-ae7e-4f5b2fa84540] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.056979303s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-386000 exec mysql-859648c796-kjlb9 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-386000 exec mysql-859648c796-kjlb9 -- mysql -ppassword -e "show databases;": exit status 1 (126.233607ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-386000 exec mysql-859648c796-kjlb9 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-386000 exec mysql-859648c796-kjlb9 -- mysql -ppassword -e "show databases;": exit status 1 (121.695312ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-386000 exec mysql-859648c796-kjlb9 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-386000 exec mysql-859648c796-kjlb9 -- mysql -ppassword -e "show databases;": exit status 1 (121.155606ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-386000 exec mysql-859648c796-kjlb9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.84s)

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1336/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo cat /etc/test/nested/copy/1336/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.69s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1336.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/1336.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1336.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo cat /usr/share/ca-certificates/1336.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/13362.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/13362.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/13362.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo cat /usr/share/ca-certificates/13362.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.69s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-386000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "sudo systemctl is-active crio": exit status 1 (447.592153ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                    
TestFunctional/parallel/License (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-386000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-386000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-386000
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-386000 image ls --format short --alsologtostderr:
I1212 14:14:29.182076    4160 out.go:296] Setting OutFile to fd 1 ...
I1212 14:14:29.182324    4160 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:29.182331    4160 out.go:309] Setting ErrFile to fd 2...
I1212 14:14:29.182335    4160 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:29.182517    4160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
I1212 14:14:29.183187    4160 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:29.183277    4160 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:29.183700    4160 cli_runner.go:164] Run: docker container inspect functional-386000 --format={{.State.Status}}
I1212 14:14:29.237399    4160 ssh_runner.go:195] Run: systemctl --version
I1212 14:14:29.237475    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386000
I1212 14:14:29.291351    4160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50008 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/functional-386000/id_rsa Username:docker}
I1212 14:14:29.379094    4160 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-386000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-386000 | 3860706a494a5 | 30B    |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-386000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-386000 image ls --format table --alsologtostderr:
I1212 14:14:30.126395    4179 out.go:296] Setting OutFile to fd 1 ...
I1212 14:14:30.126717    4179 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:30.126725    4179 out.go:309] Setting ErrFile to fd 2...
I1212 14:14:30.126732    4179 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:30.126943    4179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
I1212 14:14:30.127639    4179 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:30.127750    4179 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:30.128163    4179 cli_runner.go:164] Run: docker container inspect functional-386000 --format={{.State.Status}}
I1212 14:14:30.188501    4179 ssh_runner.go:195] Run: systemctl --version
I1212 14:14:30.188585    4179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386000
I1212 14:14:30.252775    4179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50008 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/functional-386000/id_rsa Username:docker}
I1212 14:14:30.414761    4179 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-386000 image ls --format json --alsologtostderr:
[{"id":"3860706a494a5d19b99a854a800cf8b945985c908f5a8e60c9cbb1dfd69356f7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-386000"],"size":"30"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27
126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-386000"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTag
s":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-386000 image ls --format json --alsologtostderr:
I1212 14:14:29.822014    4173 out.go:296] Setting OutFile to fd 1 ...
I1212 14:14:29.822251    4173 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:29.822257    4173 out.go:309] Setting ErrFile to fd 2...
I1212 14:14:29.822261    4173 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:29.822435    4173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
I1212 14:14:29.823145    4173 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:29.823247    4173 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:29.823675    4173 cli_runner.go:164] Run: docker container inspect functional-386000 --format={{.State.Status}}
I1212 14:14:29.876083    4173 ssh_runner.go:195] Run: systemctl --version
I1212 14:14:29.876160    4173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386000
I1212 14:14:29.929429    4173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50008 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/functional-386000/id_rsa Username:docker}
I1212 14:14:30.015774    4173 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-386000 image ls --format yaml --alsologtostderr:
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-386000
size: "32900000"
- id: 3860706a494a5d19b99a854a800cf8b945985c908f5a8e60c9cbb1dfd69356f7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-386000
size: "30"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-386000 image ls --format yaml --alsologtostderr:
I1212 14:14:29.520329    4167 out.go:296] Setting OutFile to fd 1 ...
I1212 14:14:29.520656    4167 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:29.520662    4167 out.go:309] Setting ErrFile to fd 2...
I1212 14:14:29.520667    4167 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:29.520858    4167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
I1212 14:14:29.521545    4167 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:29.521653    4167 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:29.522111    4167 cli_runner.go:164] Run: docker container inspect functional-386000 --format={{.State.Status}}
I1212 14:14:29.576756    4167 ssh_runner.go:195] Run: systemctl --version
I1212 14:14:29.576835    4167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386000
I1212 14:14:29.632599    4167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50008 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/functional-386000/id_rsa Username:docker}
I1212 14:14:29.719738    4167 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh pgrep buildkitd: exit status 1 (450.712396ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image build -t localhost/my-image:functional-386000 testdata/build --alsologtostderr
2023/12/12 14:14:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 image build -t localhost/my-image:functional-386000 testdata/build --alsologtostderr: (2.301337817s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-386000 image build -t localhost/my-image:functional-386000 testdata/build --alsologtostderr:
I1212 14:14:30.989093    4195 out.go:296] Setting OutFile to fd 1 ...
I1212 14:14:30.989400    4195 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:30.989407    4195 out.go:309] Setting ErrFile to fd 2...
I1212 14:14:30.989412    4195 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 14:14:30.989611    4195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17761-876/.minikube/bin
I1212 14:14:30.990288    4195 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:30.990908    4195 config.go:182] Loaded profile config "functional-386000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 14:14:30.991364    4195 cli_runner.go:164] Run: docker container inspect functional-386000 --format={{.State.Status}}
I1212 14:14:31.095232    4195 ssh_runner.go:195] Run: systemctl --version
I1212 14:14:31.095356    4195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386000
I1212 14:14:31.154849    4195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50008 SSHKeyPath:/Users/jenkins/minikube-integration/17761-876/.minikube/machines/functional-386000/id_rsa Username:docker}
I1212 14:14:31.242197    4195 build_images.go:151] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.360781245.tar
I1212 14:14:31.242296    4195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 14:14:31.250770    4195 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.360781245.tar
I1212 14:14:31.254688    4195 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.360781245.tar: stat -c "%s %y" /var/lib/minikube/build/build.360781245.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.360781245.tar': No such file or directory
I1212 14:14:31.254721    4195 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.360781245.tar --> /var/lib/minikube/build/build.360781245.tar (3072 bytes)
I1212 14:14:31.275519    4195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.360781245
I1212 14:14:31.284010    4195 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.360781245 -xf /var/lib/minikube/build/build.360781245.tar
I1212 14:14:31.293742    4195 docker.go:346] Building image: /var/lib/minikube/build/build.360781245
I1212 14:14:31.293812    4195 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-386000 /var/lib/minikube/build/build.360781245
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:aaec9d4f517f7b8b6063afde41902d6155601057ed2dd00b8f9f19be128ee97b done
#8 naming to localhost/my-image:functional-386000 done
#8 DONE 0.0s
I1212 14:14:33.185041    4195 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-386000 /var/lib/minikube/build/build.360781245: (1.891195448s)
I1212 14:14:33.185114    4195 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.360781245
I1212 14:14:33.194072    4195 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.360781245.tar
I1212 14:14:33.202770    4195 build_images.go:207] Built localhost/my-image:functional-386000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.360781245.tar
I1212 14:14:33.202803    4195 build_images.go:123] succeeded building to: functional-386000
I1212 14:14:33.202807    4195 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (3.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.014524872s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-386000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.09s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-386000 docker-env) && out/minikube-darwin-amd64 status -p functional-386000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-386000 docker-env) && out/minikube-darwin-amd64 status -p functional-386000": (1.234892314s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-386000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image load --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 image load --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr: (3.966198989s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.60s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image load --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 image load --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr: (2.236955274s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.581529767s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-386000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image load --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 image load --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr: (5.056264328s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image save gcr.io/google-containers/addon-resizer:functional-386000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 image save gcr.io/google-containers/addon-resizer:functional-386000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.880043693s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image rm gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr
E1212 14:12:56.795325    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.70s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.349281264s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.70s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-386000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 image save --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-386000 image save --daemon gcr.io/google-containers/addon-resizer:functional-386000 --alsologtostderr: (1.578987805s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-386000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-386000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-386000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-386000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-386000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3519: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-386000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [07e23831-664f-4aa6-9c7c-00c235741c21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [07e23831-664f-4aa6-9c7c-00c235741c21] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.015176639s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.29s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-386000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-386000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3549: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-386000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-386000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-hcln2" [90eaf7c8-e6c0-4ba2-a2df-efc6918a59d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-hcln2" [90eaf7c8-e6c0-4ba2-a2df-efc6918a59d9] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.013217242s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 service list -o json
functional_test.go:1493: Took "600.273537ms" to run "out/minikube-darwin-amd64 -p functional-386000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 service --namespace=default --https --url hello-node: signal: killed (15.001871694s)

                                                
                                                
-- stdout --
	https://127.0.0.1:50317

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:50317
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "392.232354ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "78.2818ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "390.341394ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "78.431802ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 service hello-node --url --format={{.IP}}: signal: killed (15.002662564s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 service hello-node --url: signal: killed (15.003115184s)

                                                
                                                
-- stdout --
	http://127.0.0.1:50361

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:50361
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3243700987/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3243700987/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3243700987/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T" /mount1: exit status 1 (496.293568ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-386000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3243700987/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3243700987/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3243700987/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.13s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-386000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-386000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-386000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (21.49s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-957000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-957000 --driver=docker : (21.488189059s)
--- PASS: TestImageBuild/serial/Setup (21.49s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.99s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-957000
E1212 14:14:59.678462    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/addons-631000/client.crt: no such file or directory
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-957000: (1.988662021s)
--- PASS: TestImageBuild/serial/NormalBuild (1.99s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.95s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-957000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.95s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-957000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-957000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

                                                
                                    
TestJSONOutput/start/Command (36.96s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-775000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1212 14:23:10.255818    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17761-876/.minikube/profiles/functional-386000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-775000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (36.963459238s)
--- PASS: TestJSONOutput/start/Command (36.96s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-775000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-775000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.75s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-775000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-775000 --output=json --user=testUser: (5.747491582s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.9s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-296000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-296000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (512.782641ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ce550ecf-f177-402c-835c-27a7e07c774f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-296000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b3559d0-e7d4-470f-9762-421d5cce5205","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17761"}}
	{"specversion":"1.0","id":"ebbbfcb3-14df-4147-953d-75bc84f187ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig"}}
	{"specversion":"1.0","id":"ec0b0bf6-b445-4519-9dc5-04db5a247727","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"f79d8d5c-6620-4fc6-81dc-646895e26bf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"70e01aa1-3d0f-451b-9b9a-09f9d006b507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17761-876/.minikube"}}
	{"specversion":"1.0","id":"1d2caa33-374b-43d2-9904-820bc0bc618f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c3cb2655-b5be-4764-a974-79dd3b0b3c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-296000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-296000
--- PASS: TestErrorJSONOutput (0.90s)
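
For reference, each line in the stdout above is a CloudEvents-style JSON record emitted by --output=json. Below is a minimal, illustrative Go sketch (not part of the test suite) showing how one such line could be decoded; the field names are taken directly from the output above, and everything else is an assumption.

// parse_event.go: decode one of the CloudEvents-style JSON lines shown in the
// TestErrorJSONOutput stdout above. Illustrative only; field names come from
// the log, the rest is a sketch.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the log above (driver 'fail' rejected, exit code 56).
	line := `{"specversion":"1.0","id":"c3cb2655-b5be-4764-a974-79dd3b0b3c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// A consumer of the JSON output would typically branch on the event type
	// and, for error events, surface data["exitcode"] and data["message"].
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
}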

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.68s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-321000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-321000 --network=: (21.246639063s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-321000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-321000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-321000: (2.379836247s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.68s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.73s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-366000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-366000 --network=bridge: (20.427180307s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-366000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-366000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-366000: (2.250728093s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.73s)

                                                
                                    
TestKicExistingNetwork (23.94s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-498000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-498000 --network=existing-network: (21.329953207s)
helpers_test.go:175: Cleaning up "existing-network-498000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-498000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-498000: (2.273704109s)
--- PASS: TestKicExistingNetwork (23.94s)

                                                
                                    
TestKicCustomSubnet (23.56s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-661000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-661000 --subnet=192.168.60.0/24: (21.066303507s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-661000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-661000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-661000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-661000: (2.445058983s)
--- PASS: TestKicCustomSubnet (23.56s)
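
As a side note, the verification step above boils down to reading the network's IPAM configuration back out of Docker and comparing it with the value passed to --subnet. A rough Go sketch of the same check follows; it assumes the docker CLI is on PATH and reuses the network name and --format expression from the log, but it is illustrative only, not the test's actual code.

// check_subnet.go: re-run the subnet verification shown in TestKicCustomSubnet.
// Assumes the docker CLI is available; the network name and expected subnet are
// taken from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func networkSubnet(name string) (string, error) {
	// Same inspection command the test runs, per the log above.
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	want := "192.168.60.0/24"
	got, err := networkSubnet("custom-subnet-661000")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got != want {
		fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
		return
	}
	fmt.Println("subnet matches:", got)
}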

                                                
                                    
TestKicStaticIP (24.2s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-229000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-229000 --static-ip=192.168.200.200: (21.540973513s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-229000 ip
helpers_test.go:175: Cleaning up "static-ip-229000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-229000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-229000: (2.411863557s)
--- PASS: TestKicStaticIP (24.20s)

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (49.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-944000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-944000 --driver=docker : (20.911967019s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-946000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-946000 --driver=docker : (22.327726341s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-944000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-946000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-946000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-946000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-946000: (2.486702755s)
helpers_test.go:175: Cleaning up "first-944000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-944000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-944000: (2.470590592s)
--- PASS: TestMinikubeProfile (49.82s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.32s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-242000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-242000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.320412098s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.32s)

                                                
                                    
TestPreload (161.02s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-661000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-661000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m32.005589963s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-661000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-661000 image pull gcr.io/k8s-minikube/busybox: (2.133535867s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-661000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-661000: (10.842637155s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-661000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-661000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (53.212694883s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-661000 image list
helpers_test.go:175: Cleaning up "test-preload-661000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-661000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-661000: (2.537923213s)
--- PASS: TestPreload (161.02s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.06s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17761
- KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3943462536/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3943462536/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3943462536/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3943462536/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.06s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.52s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17761
- KUBECONFIG=/Users/jenkins/minikube-integration/17761-876/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3995200408/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3995200408/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3995200408/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3995200408/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.52s)

                                                
                                    

Test skip (21/189)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 13.269283ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-tkgfr" [5926cf68-95ca-438f-9e88-2d58098f1e19] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012389501s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tbhln" [13e220ef-bd9b-41e7-b507-26488a1ed56f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012292969s
addons_test.go:339: (dbg) Run:  kubectl --context addons-631000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-631000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-631000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.998026062s)
addons_test.go:354: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.09s)

                                                
                                    
TestAddons/parallel/Ingress (12.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-631000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-631000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-631000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d4f12694-b7f7-465a-b544-e0e865eb167f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d4f12694-b7f7-465a-b544-e0e865eb167f] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.011021778s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-631000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:281: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.20s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-386000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-386000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-qdmvv" [58be6159-cb09-48fc-9622-6951f7e72463] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-qdmvv" [58be6159-cb09-48fc-9622-6951f7e72463] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.012207269s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.17s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (13.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port732766601/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702419226610775000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port732766601/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702419226610775000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port732766601/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702419226610775000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port732766601/001/test-1702419226610775000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.993859ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.114693ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.493562ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.762421ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.789643ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.440819ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.950298ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "sudo umount -f /mount-9p": exit status 1 (352.592884ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-amd64 -p functional-386000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port732766601/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.06s)
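
The repetition above reflects a poll-and-retry pattern: the mount check is re-run over ssh until it succeeds or the attempts run out, after which the test skips with the macOS code-signing explanation. A rough, illustrative Go sketch of that pattern follows; the sshCommand helper and the retry counts are assumptions for the sketch, not the test's actual implementation.

// mount_poll.go: sketch of retrying "findmnt -T /mount-9p | grep 9p" over ssh,
// as seen in the MountCmd logs above, and giving up after a fixed number of
// attempts. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshCommand is a hypothetical stand-in for running a command inside the node
// via "out/minikube-darwin-amd64 -p <profile> ssh <cmd>".
func sshCommand(profile, cmd string) error {
	return exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh", cmd).Run()
}

func mountAppeared(profile string, attempts int, delay time.Duration) bool {
	for i := 0; i < attempts; i++ {
		if err := sshCommand(profile, "findmnt -T /mount-9p | grep 9p"); err == nil {
			return true
		}
		time.Sleep(delay)
	}
	return false
}

func main() {
	if !mountAppeared("functional-386000", 7, time.Second) {
		// Mirrors the skip reason in the log: on macOS the unsigned binary may
		// be blocked from listening on a non-localhost port without a prompt.
		fmt.Println("skipping: mount did not appear")
		return
	}
	fmt.Println("mount is visible via findmnt")
}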

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (14.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port362820436/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.269624ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.217575ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.590073ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.98048ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.717044ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.686615ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.820208ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-386000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-386000 ssh "sudo umount -f /mount-9p": exit status 1 (352.5511ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-386000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-386000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port362820436/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.88s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    