Test Report: Docker_macOS 17857

6e3ba89264b64b7b6259573ef051dd85e83461cf:2023-12-26:32448

Failed tests (26/190)

TestOffline (751.4s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-595000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-595000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m30.451451085s)
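A minimal sketch for re-running just this test from a minikube source checkout (an assumption; it presumes the darwin binary has already been built at out/minikube-darwin-amd64, as the command above uses, and relies only on standard go test flags):

    # Run only TestOffline from the integration suite, with a generous timeout.
    go test -v -run TestOffline -timeout 30m ./test/integration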

-- stdout --
	* [offline-docker-595000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-595000 in cluster offline-docker-595000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-595000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1226 15:12:24.792893    8849 out.go:296] Setting OutFile to fd 1 ...
	I1226 15:12:24.793183    8849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:12:24.793189    8849 out.go:309] Setting ErrFile to fd 2...
	I1226 15:12:24.793193    8849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:12:24.793380    8849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 15:12:24.795211    8849 out.go:303] Setting JSON to false
	I1226 15:12:24.820534    8849 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6114,"bootTime":1703626230,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 15:12:24.820651    8849 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 15:12:24.842392    8849 out.go:177] * [offline-docker-595000] minikube v1.32.0 on Darwin 14.2.1
	I1226 15:12:24.905297    8849 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 15:12:24.884350    8849 notify.go:220] Checking for updates...
	I1226 15:12:24.947322    8849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 15:12:24.989271    8849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 15:12:25.010345    8849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 15:12:25.031256    8849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 15:12:25.052285    8849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 15:12:25.073584    8849 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 15:12:25.138254    8849 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 15:12:25.138455    8849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:12:25.295566    8849 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-12-26 23:12:25.282252974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 15:12:25.318021    8849 out.go:177] * Using the docker driver based on user configuration
	I1226 15:12:25.339796    8849 start.go:298] selected driver: docker
	I1226 15:12:25.339813    8849 start.go:902] validating driver "docker" against <nil>
	I1226 15:12:25.339822    8849 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 15:12:25.342962    8849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:12:25.454454    8849 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-12-26 23:12:25.442669719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 15:12:25.454666    8849 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 15:12:25.454879    8849 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 15:12:25.475847    8849 out.go:177] * Using Docker Desktop driver with root privileges
	I1226 15:12:25.496900    8849 cni.go:84] Creating CNI manager for ""
	I1226 15:12:25.496925    8849 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 15:12:25.496934    8849 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1226 15:12:25.496946    8849 start_flags.go:323] config:
	{Name:offline-docker-595000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-595000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 15:12:25.560942    8849 out.go:177] * Starting control plane node offline-docker-595000 in cluster offline-docker-595000
	I1226 15:12:25.602791    8849 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 15:12:25.644893    8849 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 15:12:25.686988    8849 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:12:25.687047    8849 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 15:12:25.687057    8849 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 15:12:25.687074    8849 cache.go:56] Caching tarball of preloaded images
	I1226 15:12:25.687322    8849 preload.go:174] Found /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 15:12:25.687333    8849 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 15:12:25.688265    8849 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/offline-docker-595000/config.json ...
	I1226 15:12:25.688337    8849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/offline-docker-595000/config.json: {Name:mk54131f5bc37b9927ce8ea572d35c5018d6a31d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
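# The cluster config serialized above is persisted as JSON at the config.json
# path being locked here; a quick, hedged way to inspect it in readable form
# (assumes python3 is available on the agent):
python3 -m json.tool /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/offline-docker-595000/config.json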
	I1226 15:12:25.812079    8849 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 15:12:25.812107    8849 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 15:12:25.812133    8849 cache.go:194] Successfully downloaded all kic artifacts
	I1226 15:12:25.812208    8849 start.go:365] acquiring machines lock for offline-docker-595000: {Name:mkc57cb4fbd8dca830eb26f3cdd36af864c6929b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:12:25.812411    8849 start.go:369] acquired machines lock for "offline-docker-595000" in 184.18µs
	I1226 15:12:25.812446    8849 start.go:93] Provisioning new machine with config: &{Name:offline-docker-595000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-595000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 15:12:25.812532    8849 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:12:25.854783    8849 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:12:25.855086    8849 start.go:159] libmachine.API.Create for "offline-docker-595000" (driver="docker")
	I1226 15:12:25.855115    8849 client.go:168] LocalClient.Create starting
	I1226 15:12:25.855237    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:12:25.855299    8849 main.go:141] libmachine: Decoding PEM data...
	I1226 15:12:25.855322    8849 main.go:141] libmachine: Parsing certificate...
	I1226 15:12:25.855400    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:12:25.855438    8849 main.go:141] libmachine: Decoding PEM data...
	I1226 15:12:25.855446    8849 main.go:141] libmachine: Parsing certificate...
	I1226 15:12:25.855982    8849 cli_runner.go:164] Run: docker network inspect offline-docker-595000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:12:25.958391    8849 cli_runner.go:211] docker network inspect offline-docker-595000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:12:25.958479    8849 network_create.go:281] running [docker network inspect offline-docker-595000] to gather additional debugging logs...
	I1226 15:12:25.958509    8849 cli_runner.go:164] Run: docker network inspect offline-docker-595000
	W1226 15:12:26.016056    8849 cli_runner.go:211] docker network inspect offline-docker-595000 returned with exit code 1
	I1226 15:12:26.016105    8849 network_create.go:284] error running [docker network inspect offline-docker-595000]: docker network inspect offline-docker-595000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-595000 not found
	I1226 15:12:26.016122    8849 network_create.go:286] output of [docker network inspect offline-docker-595000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-595000 not found
	
	** /stderr **
	I1226 15:12:26.016267    8849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:12:26.075342    8849 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:12:26.075794    8849 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002161340}
	I1226 15:12:26.075814    8849 network_create.go:124] attempt to create docker network offline-docker-595000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1226 15:12:26.075889    8849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-595000 offline-docker-595000
	W1226 15:12:26.135623    8849 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-595000 offline-docker-595000 returned with exit code 1
	W1226 15:12:26.135698    8849 network_create.go:149] failed to create docker network offline-docker-595000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-595000 offline-docker-595000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1226 15:12:26.135729    8849 network_create.go:116] failed to create docker network offline-docker-595000 192.168.58.0/24, will retry: subnet is taken
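# "Pool overlaps with other one on this address space" means another docker
# network already claims 192.168.58.0/24, so minikube retries with the next
# candidate subnet. A diagnostic sketch that lists the subnet owned by each
# existing network (output names depend on what exists on the host):
docker network ls --format '{{.Name}}' \
  | xargs -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'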
	I1226 15:12:26.137361    8849 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:12:26.137875    8849 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00229bfe0}
	I1226 15:12:26.137893    8849 network_create.go:124] attempt to create docker network offline-docker-595000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1226 15:12:26.137983    8849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-595000 offline-docker-595000
	I1226 15:12:26.236822    8849 network_create.go:108] docker network offline-docker-595000 192.168.67.0/24 created
	I1226 15:12:26.236867    8849 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-595000" container
	I1226 15:12:26.237066    8849 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:12:26.300293    8849 cli_runner.go:164] Run: docker volume create offline-docker-595000 --label name.minikube.sigs.k8s.io=offline-docker-595000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:12:26.467765    8849 oci.go:103] Successfully created a docker volume offline-docker-595000
	I1226 15:12:26.467897    8849 cli_runner.go:164] Run: docker run --rm --name offline-docker-595000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-595000 --entrypoint /usr/bin/test -v offline-docker-595000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:12:27.006381    8849 oci.go:107] Successfully prepared a docker volume offline-docker-595000
	I1226 15:12:27.006456    8849 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:12:27.006468    8849 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:12:27.006625    8849 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-595000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
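# Note the timestamp jump after this command: the extraction starts at
# 15:12:27 and the next log line is at 15:18:25, so this single step consumed
# essentially the whole 360s createHost budget whose timeout is reported
# further below. A sketch for timing the extraction in isolation (arguments
# copied verbatim from the line above):
time docker run --rm --entrypoint /usr/bin/tar \
  -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
  -v offline-docker-595000:/extractDir \
  gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c \
  -I lz4 -xf /preloaded.tar -C /extractDir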
	I1226 15:18:25.852753    8849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:18:25.852930    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:25.909224    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:18:25.909329    8849 retry.go:31] will retry after 144.736633ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:26.054511    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:26.109834    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:18:26.109947    8849 retry.go:31] will retry after 535.793508ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:26.647274    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:26.701006    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:18:26.701138    8849 retry.go:31] will retry after 659.943071ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:27.361566    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:27.415340    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	W1226 15:18:27.415448    8849 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	
	W1226 15:18:27.415474    8849 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:27.415540    8849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:18:27.415589    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:27.467604    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:18:27.467700    8849 retry.go:31] will retry after 239.826614ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:27.708098    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:27.762784    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:18:27.762879    8849 retry.go:31] will retry after 234.234819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:27.998580    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:28.052663    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:18:28.052762    8849 retry.go:31] will retry after 417.641148ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:28.471359    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:18:28.525716    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	W1226 15:18:28.525832    8849 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	
	W1226 15:18:28.525851    8849 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:28.525864    8849 start.go:128] duration metric: createHost completed in 6m2.716515869s
	I1226 15:18:28.525871    8849 start.go:83] releasing machines lock for "offline-docker-595000", held for 6m2.71665411s
	W1226 15:18:28.525884    8849 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1226 15:18:28.526337    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:28.578623    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:28.578686    8849 delete.go:82] Unable to get host status for offline-docker-595000, assuming it has already been deleted: state: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	W1226 15:18:28.578822    8849 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1226 15:18:28.578832    8849 start.go:709] Will try again in 5 seconds ...
	I1226 15:18:33.579456    8849 start.go:365] acquiring machines lock for offline-docker-595000: {Name:mkc57cb4fbd8dca830eb26f3cdd36af864c6929b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:18:33.579625    8849 start.go:369] acquired machines lock for "offline-docker-595000" in 122.158µs
	I1226 15:18:33.579656    8849 start.go:96] Skipping create...Using existing machine configuration
	I1226 15:18:33.579670    8849 fix.go:54] fixHost starting: 
	I1226 15:18:33.580113    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:33.633345    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:33.633398    8849 fix.go:102] recreateIfNeeded on offline-docker-595000: state= err=unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:33.633421    8849 fix.go:107] machineExists: false. err=machine does not exist
	I1226 15:18:33.655138    8849 out.go:177] * docker "offline-docker-595000" container is missing, will recreate.
	I1226 15:18:33.698673    8849 delete.go:124] DEMOLISHING offline-docker-595000 ...
	I1226 15:18:33.698834    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:33.753494    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	W1226 15:18:33.753543    8849 stop.go:75] unable to get state: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:33.753561    8849 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:33.753920    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:33.808489    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:33.808543    8849 delete.go:82] Unable to get host status for offline-docker-595000, assuming it has already been deleted: state: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:33.808645    8849 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-595000
	W1226 15:18:33.861055    8849 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-595000 returned with exit code 1
	I1226 15:18:33.861123    8849 kic.go:371] could not find the container offline-docker-595000 to remove it. will try anyways
	I1226 15:18:33.861186    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:33.915774    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	W1226 15:18:33.915856    8849 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:33.916055    8849 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-595000 /bin/bash -c "sudo init 0"
	W1226 15:18:33.968128    8849 cli_runner.go:211] docker exec --privileged -t offline-docker-595000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 15:18:33.968163    8849 oci.go:650] error shutdown offline-docker-595000: docker exec --privileged -t offline-docker-595000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:34.969662    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:35.023891    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:35.023955    8849 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:35.023970    8849 oci.go:664] temporary error: container offline-docker-595000 status is  but expect it to be exited
	I1226 15:18:35.023994    8849 retry.go:31] will retry after 641.588641ms: couldn't verify container is exited. %v: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:35.666021    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:35.720567    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:35.720617    8849 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:35.720628    8849 oci.go:664] temporary error: container offline-docker-595000 status is  but expect it to be exited
	I1226 15:18:35.720651    8849 retry.go:31] will retry after 747.706384ms: couldn't verify container is exited. %v: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:36.469267    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:36.524748    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:36.524796    8849 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:36.524805    8849 oci.go:664] temporary error: container offline-docker-595000 status is  but expect it to be exited
	I1226 15:18:36.524830    8849 retry.go:31] will retry after 996.28852ms: couldn't verify container is exited. %v: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:37.521570    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:37.574825    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:37.574874    8849 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:37.574884    8849 oci.go:664] temporary error: container offline-docker-595000 status is  but expect it to be exited
	I1226 15:18:37.574910    8849 retry.go:31] will retry after 2.331781318s: couldn't verify container is exited. %v: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:39.906966    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:39.961896    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:39.961946    8849 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:39.961958    8849 oci.go:664] temporary error: container offline-docker-595000 status is  but expect it to be exited
	I1226 15:18:39.961981    8849 retry.go:31] will retry after 1.410735086s: couldn't verify container is exited. %v: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:41.373070    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:41.426506    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:41.426554    8849 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:41.426564    8849 oci.go:664] temporary error: container offline-docker-595000 status is  but expect it to be exited
	I1226 15:18:41.426587    8849 retry.go:31] will retry after 5.659721477s: couldn't verify container is exited. %v: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:47.086630    8849 cli_runner.go:164] Run: docker container inspect offline-docker-595000 --format={{.State.Status}}
	W1226 15:18:47.140711    8849 cli_runner.go:211] docker container inspect offline-docker-595000 --format={{.State.Status}} returned with exit code 1
	I1226 15:18:47.140770    8849 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:18:47.140782    8849 oci.go:664] temporary error: container offline-docker-595000 status is  but expect it to be exited
	I1226 15:18:47.140809    8849 oci.go:88] couldn't shut down offline-docker-595000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	 
	I1226 15:18:47.140880    8849 cli_runner.go:164] Run: docker rm -f -v offline-docker-595000
	I1226 15:18:47.196578    8849 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-595000
	W1226 15:18:47.249354    8849 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-595000 returned with exit code 1
	I1226 15:18:47.249473    8849 cli_runner.go:164] Run: docker network inspect offline-docker-595000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:18:47.302480    8849 cli_runner.go:164] Run: docker network rm offline-docker-595000
	I1226 15:18:47.420627    8849 fix.go:114] Sleeping 1 second for extra luck!
	I1226 15:18:48.420881    8849 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:18:48.442769    8849 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:18:48.442949    8849 start.go:159] libmachine.API.Create for "offline-docker-595000" (driver="docker")
	I1226 15:18:48.442982    8849 client.go:168] LocalClient.Create starting
	I1226 15:18:48.443156    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:18:48.443245    8849 main.go:141] libmachine: Decoding PEM data...
	I1226 15:18:48.443273    8849 main.go:141] libmachine: Parsing certificate...
	I1226 15:18:48.443346    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:18:48.443414    8849 main.go:141] libmachine: Decoding PEM data...
	I1226 15:18:48.443428    8849 main.go:141] libmachine: Parsing certificate...
	I1226 15:18:48.444294    8849 cli_runner.go:164] Run: docker network inspect offline-docker-595000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:18:48.498896    8849 cli_runner.go:211] docker network inspect offline-docker-595000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:18:48.499000    8849 network_create.go:281] running [docker network inspect offline-docker-595000] to gather additional debugging logs...
	I1226 15:18:48.499016    8849 cli_runner.go:164] Run: docker network inspect offline-docker-595000
	W1226 15:18:48.551546    8849 cli_runner.go:211] docker network inspect offline-docker-595000 returned with exit code 1
	I1226 15:18:48.551581    8849 network_create.go:284] error running [docker network inspect offline-docker-595000]: docker network inspect offline-docker-595000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-595000 not found
	I1226 15:18:48.551603    8849 network_create.go:286] output of [docker network inspect offline-docker-595000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-595000 not found
	
	** /stderr **
	I1226 15:18:48.551756    8849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:18:48.607272    8849 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:18:48.609012    8849 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:18:48.610594    8849 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:18:48.612256    8849 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:18:48.612699    8849 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002161aa0}
	I1226 15:18:48.612711    8849 network_create.go:124] attempt to create docker network offline-docker-595000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1226 15:18:48.612778    8849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-595000 offline-docker-595000
	I1226 15:18:48.704108    8849 network_create.go:108] docker network offline-docker-595000 192.168.85.0/24 created
	I1226 15:18:48.704157    8849 kic.go:121] calculated static IP "192.168.85.2" for the "offline-docker-595000" container
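
The four "skipping subnet" lines above walk candidate 192.168.x.0/24 ranges, stepping the third octet by 9 (49, 58, 67, 76), until the first free one turns up at 192.168.85.0/24. The Go sketch below reproduces that scan in miniature; it is an illustration only, not minikube's network package, and its hard-coded "taken" set stands in for querying Docker for the subnets existing networks occupy.

    // subnetscan.go: illustrative only; "taken" stands in for asking Docker
    // which subnets its existing bridge networks already use.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
        }
        // Step the third octet by 9, as the log above does: 49, 58, 67, 76, 85, ...
        for third := 49; third < 256; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                fmt.Println("skipping reserved subnet", cidr)
                continue
            }
            _, ipnet, err := net.ParseCIDR(cidr)
            if err != nil {
                continue
            }
            gw := ipnet.IP.To4()
            gw[3] = 1 // convention in the log: gateway at .1, first client IP at .2
            fmt.Printf("using free private subnet %s with gateway %s\n", cidr, gw)
            return
        }
    }
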
	I1226 15:18:48.704271    8849 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:18:48.759416    8849 cli_runner.go:164] Run: docker volume create offline-docker-595000 --label name.minikube.sigs.k8s.io=offline-docker-595000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:18:48.812734    8849 oci.go:103] Successfully created a docker volume offline-docker-595000
	I1226 15:18:48.812844    8849 cli_runner.go:164] Run: docker run --rm --name offline-docker-595000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-595000 --entrypoint /usr/bin/test -v offline-docker-595000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:18:49.151356    8849 oci.go:107] Successfully prepared a docker volume offline-docker-595000
	I1226 15:18:49.151398    8849 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:18:49.151411    8849 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:18:49.151523    8849 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-595000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
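
That docker run line is the preload extraction: tar inside the kicbase image unpacks the lz4 preload into the cluster's volume. Note the timestamps: the command starts at 15:18:49 and the very next log entry is stamped 15:24:48, so this one step consumed essentially the whole 360-second createHost budget that the timeout below reports. A minimal sketch of issuing the same kind of invocation under an explicit deadline follows; it is not minikube's code, and the host tarball path is a placeholder.

    // extract.go: illustrative sketch, not minikube's implementation. The
    // image tag is copied from the log; the host tarball path is a placeholder.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder
            "-v", "offline-docker-595000:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if ctx.Err() == context.DeadlineExceeded {
            fmt.Println("extraction timed out; the container was killed")
            return
        }
        if err != nil {
            fmt.Printf("extraction failed: %v\n%s", err, out)
        }
    }
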
	I1226 15:24:48.499971    8849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:24:48.500119    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:48.554007    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:48.554134    8849 retry.go:31] will retry after 287.987913ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:48.842509    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:48.897485    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:48.897606    8849 retry.go:31] will retry after 305.16411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:49.203496    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:49.257025    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:49.257123    8849 retry.go:31] will retry after 483.254856ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:49.740802    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:49.795995    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	W1226 15:24:49.796101    8849 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	
	W1226 15:24:49.796124    8849 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:49.796188    8849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:24:49.796244    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:49.868549    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:49.868638    8849 retry.go:31] will retry after 265.716285ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:50.136613    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:50.189149    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:50.189249    8849 retry.go:31] will retry after 241.859885ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:50.432764    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:50.486383    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:50.486487    8849 retry.go:31] will retry after 614.906227ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:51.101723    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:51.154604    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:51.154700    8849 retry.go:31] will retry after 595.700206ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:51.752014    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:51.807602    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	W1226 15:24:51.807725    8849 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	
	W1226 15:24:51.807742    8849 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:51.807773    8849 start.go:128] duration metric: createHost completed in 6m3.331140975s
	I1226 15:24:51.807838    8849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:24:51.807899    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:51.861309    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:51.861431    8849 retry.go:31] will retry after 332.994258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:52.196734    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:52.250469    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:52.250569    8849 retry.go:31] will retry after 338.233811ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:52.590980    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:52.643358    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:52.643454    8849 retry.go:31] will retry after 712.438295ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:53.356319    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:53.409294    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	W1226 15:24:53.409393    8849 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	
	W1226 15:24:53.409413    8849 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:53.409469    8849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:24:53.409530    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:53.461862    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:53.462001    8849 retry.go:31] will retry after 276.31308ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:53.738762    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:53.792324    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:53.792419    8849 retry.go:31] will retry after 481.187096ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:54.274312    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:54.327633    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	I1226 15:24:54.327732    8849 retry.go:31] will retry after 698.51283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:55.026556    8849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000
	W1226 15:24:55.083688    8849 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000 returned with exit code 1
	W1226 15:24:55.083789    8849 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	
	W1226 15:24:55.083809    8849 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-595000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-595000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000
	I1226 15:24:55.083825    8849 fix.go:56] fixHost completed within 6m21.448653948s
	I1226 15:24:55.083849    8849 start.go:83] releasing machines lock for "offline-docker-595000", held for 6m21.448706766s
	W1226 15:24:55.083937    8849 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-595000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-595000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1226 15:24:55.127459    8849 out.go:177] 
	W1226 15:24:55.149685    8849 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1226 15:24:55.149767    8849 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1226 15:24:55.149808    8849 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1226 15:24:55.171250    8849 out.go:177] 

** /stderr **
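
Every "retry.go:31] will retry after ..." line in the capture above is one turn of the same loop: ask `docker container inspect` for the host port mapped to 22, get "No such container" because the container was never recreated, sleep a short randomized interval, and try again. A stand-in for that pattern, built only on the standard library (it is not minikube's retry package), looks like this:

    // retrysketch.go: illustrative jittered-backoff loop resembling the
    // "will retry after ..." lines above; not minikube's package.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryAfter calls fn until it succeeds or attempts run out, sleeping a
    // jittered, growing interval between tries.
    func retryAfter(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base/2 + time.Duration(rand.Int63n(int64(base))) // 50-150% of base
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            base *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retryAfter(4, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("No such container: offline-docker-595000")
            }
            return nil
        })
        fmt.Println("final result:", err)
    }
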
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-595000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2023-12-26 15:24:55.247549 -0800 PST m=+6059.499967880
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-595000
helpers_test.go:235: (dbg) docker inspect offline-docker-595000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-595000",
	        "Id": "62f90c5c4b8a9cf9e969daf12c71d30d4ac829fd51d86ed90954abe5bb57be81",
	        "Created": "2023-12-26T23:18:48.66208623Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-595000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-595000 -n offline-docker-595000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-595000 -n offline-docker-595000: exit status 7 (115.044778ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1226 15:24:55.422567    9533 status.go:249] status error: host: state: unknown state "offline-docker-595000": docker container inspect offline-docker-595000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-595000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-595000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-595000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-595000
--- FAIL: TestOffline (751.40s)

TestCertOptions (7200.748s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-605000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E1226 15:38:46.351227    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:39:19.382983    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:39:36.323109    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:43:46.349402    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
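
These E1226 cert_rotation lines come from client-go's certificate-rotation worker, which is still re-reading client key pairs under profiles (addons-914000, functional-155000) that earlier tests have already deleted; the dynamicClientCert goroutines in the dump below (837 and 186, plus their workqueue helpers) are that worker. In spirit it reduces to a reload loop like the sketch below, which illustrates the failing pattern with placeholder paths and is not client-go's actual code.

    // certreload.go: illustration of the failing reload pattern only; this is
    // not client-go's implementation, and both paths are placeholders.
    package main

    import (
        "crypto/tls"
        "log"
        "time"
    )

    func main() {
        certFile := "/Users/jenkins/.minikube/profiles/addons-914000/client.crt" // placeholder
        keyFile := "/Users/jenkins/.minikube/profiles/addons-914000/client.key"  // placeholder
        for i := 0; i < 3; i++ {
            // Once the profile directory is deleted, each re-read fails with
            // the same "no such file or directory" shape as the log above.
            if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
                log.Printf("key failed with : %v", err)
            }
            time.Sleep(time.Second)
        }
    }
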
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (6m23s)
	TestCertOptions (5m47s)
	TestNetworkPlugins (31m31s)
	TestNetworkPlugins/group (31m31s)
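
The panic itself is Go's test harness at work: testing.(*M).startAlarm (goroutine 2110 below) arms a timer for the -timeout value, two hours here, and panics when it fires, which is why the four tests still in flight are listed together. A per-test deadline on the spawned minikube process keeps one wedged subprocess from draining that shared budget; the sketch below shows the assumed pattern, not how this suite is actually written.

    // timeoutguard_test.go: an assumed pattern, not minikube's test code.
    package integration

    import (
        "context"
        "os/exec"
        "testing"
        "time"
    )

    func TestWithDeadline(t *testing.T) {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
        defer cancel()
        // CommandContext kills the child when ctx expires, unblocking the
        // cmd.Wait call that goroutines 512 and 545 below are parked in.
        cmd := exec.CommandContext(ctx, "sleep", "1") // placeholder command
        if err := cmd.Run(); err != nil {
            t.Fatalf("command failed: %v (ctx: %v)", err, ctx.Err())
        }
    }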

goroutine 2110 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0007eb040, 0xc000977b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc0006ea1e0?, {0x527bbe0, 0x2a, 0x2a}, {0x10b0145?, 0xc000068180?, 0x529d420?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc0006ea1e0)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

goroutine 35 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001ad780)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 1801 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002687040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002687040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002687040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002687040, 0xc002844480)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad
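
Goroutine 1801 above, and the many identical frames that follow, are all parked in testing.(*testContext).waitParallel: each test called t.Parallel() and is queued for a run slot under the -parallel limit, slots the stuck cert tests above never give back, hence the 31-to-33-minute waits. The two-test sketch below reproduces the gate when run with `go test -parallel=1`; it is a demonstration, not code from this suite.

    // parallelgate_test.go: demonstration only. With -parallel=1, TestWaiter
    // sits in waitParallel until TestHog releases the lone slot.
    package integration

    import (
        "testing"
        "time"
    )

    func TestHog(t *testing.T) {
        t.Parallel()
        time.Sleep(2 * time.Second) // holds the only parallel slot
    }

    func TestWaiter(t *testing.T) {
        t.Parallel() // blocks in waitParallel until TestHog finishes
    }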

goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

goroutine 1784 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002134680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002134680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc002134680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:228 +0x39
testing.tRunner(0xc002134680, 0x3b43408)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1802 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002687380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002687380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002687380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002687380, 0xc002844500)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 837 [chan receive, 110 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0028a9e80, 0xc000064f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 750
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 512 [syscall, 5 minutes]:
syscall.syscall6(0x1010585?, 0xc0007e38f8?, 0xc0007e37e8?, 0xc0007e3918?, 0x100c0007e38e0?, 0x1000000000003?, 0x4d2fd5c8?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0007e3890?, 0x1010905?, 0x90?, 0x305b340?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc0028609e0?, 0xc0007e38c4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002972240)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0022426e0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002aa0000?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002aa0000, 0xc0022426e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertOptions(0xc002aa0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x40e
testing.tRunner(0xc002aa0000, 0x3b43358)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1693 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002686340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002686340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002686340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002686340, 0x3b43440)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 545 [syscall, 6 minutes]:
syscall.syscall6(0x10106dd?, 0x59cd108?, 0xc000b7d917?, 0xc000b7dab8?, 0x100c000b7da80?, 0x1010000000003?, 0x4d2fd5c8?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000b7da30?, 0x1010905?, 0x90?, 0x305b340?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc002860180?, 0xc000b7da64, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002972000)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002242000)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002aa01a0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002aa01a0, 0xc002242000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertExpiration(0xc002aa01a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2d7
testing.tRunner(0xc002aa01a0, 0x3b43350)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1183 [select, 108 minutes]:
net/http.(*persistConn).writeLoop(0xc002564a20)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1198
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

goroutine 2079 [IO wait]:
internal/poll.runtime_pollWait(0x4cac0168, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275c060?, 0xc000671ad3?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275c060, {0xc000671ad3, 0x52d, 0x52d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004d6090, {0xc000671ad3?, 0xc0027c0668?, 0xc0027c0668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00223c240, {0x3f91b00, 0xc0004d6090})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f91b80, 0xc00223c240}, {0x3f91b00, 0xc0004d6090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002714a80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 545
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1129 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc00256d340, 0xc00256e5a0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1128
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 2108 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4cac0e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275cba0?, 0xc000655800?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275cba0, {0xc000655800, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004d62c0, {0xc000655800?, 0xc000b55668?, 0xc000b55668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00223c600, {0x3f91b00, 0xc0004d62c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f91b80, 0xc00223c600}, {0x3f91b00, 0xc0004d62c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002714600?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 512
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 2107 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4cac0738, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275cae0?, 0xc0022ef283?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275cae0, {0xc0022ef283, 0x57d, 0x57d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004d6298, {0xc0022ef283?, 0xc000b03500?, 0xc0027bb668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00223c5d0, {0x3f91b00, 0xc0004d6298})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f91b80, 0xc00223c5d0}, {0x3f91b00, 0xc0004d6298}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002726420?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 512
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1182 [select, 108 minutes]:
net/http.(*persistConn).readLoop(0xc002564a20)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1198
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

goroutine 1804 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0026876c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0026876c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026876c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0026876c0, 0xc002844600)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1803 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002687520)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002687520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002687520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002687520, 0xc002844580)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1175 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc00261fe40, 0xc002453d40)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 737
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 185 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b03200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 186 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b07280, 0xc000064f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 1797 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0026869c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0026869c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026869c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0026869c0, 0xc002844280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 189 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000b07250, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f8eaf0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b030e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b07280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f93000, 0xc0006757d0}, 0x1, 0xc000064f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc000b55fd0?, 0x15e8c85?, 0xc000b03200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 186
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 190 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fb5b38, 0xc000064f60}, 0xc00010ef50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fb5b38, 0xc000064f60}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fb5b38?, 0xc000064f60?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 186
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 191 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 631 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0x4cac0640, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002844100?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc002844100)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc002844100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000689080)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc000689080)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc00043e1e0, {0x3fa92c0, 0xc000689080})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc00043e1e0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc000738680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 628
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a
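
Goroutine 631 is the helper HTTP proxy that functional_test.go started 112 minutes earlier, still blocked in Accept with nothing tying its lifetime to the test that spawned it. A scoped variant, shown below as an assumed alternative rather than the suite's actual startHTTPProxy, closes the listener through t.Cleanup when the test ends.

    // scopedproxy_test.go: an assumed alternative, not the suite's
    // startHTTPProxy; the server is torn down with the test that made it.
    package integration

    import (
        "net"
        "net/http"
        "testing"
    )

    func startScopedProxy(t *testing.T) net.Addr {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            t.Fatal(err)
        }
        srv := &http.Server{Handler: http.NotFoundHandler()}
        go srv.Serve(ln)
        t.Cleanup(func() { srv.Close() }) // stops Accept when the test ends
        return ln.Addr()
    }

    func TestScopedProxy(t *testing.T) {
        addr := startScopedProxy(t)
        resp, err := http.Get("http://" + addr.String())
        if err != nil {
            t.Fatal(err)
        }
        resp.Body.Close()
    }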

goroutine 1783 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0021344e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0021344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0021344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:156 +0x86
testing.tRunner(0xc0021344e0, 0x3b43488)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1692 [chan receive, 33 minutes]:
testing.(*T).Run(0xc002686000, {0x30ed0c9?, 0x497946ea52c?}, 0xc0021ae240)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002686000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002686000, 0x3b43438)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1796 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002686820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002686820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002686820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002686820, 0xc002844000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1799 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002686d00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002686d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002686d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002686d00, 0xc002844380)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1798 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002686b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002686b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002686b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002686b60, 0xc002844300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2109 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0022426e0, 0xc0027264e0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 512
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 1800 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002686ea0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002686ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002686ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002686ea0, 0xc002844400)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1795
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1795 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0026861a0, 0xc0021ae240)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1692
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1694 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0026864e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0026864e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0026864e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0026864e0, 0x3b43450)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1005 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc0021b09a0, 0xc0023d6d80)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1004
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 1782 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002134340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002134340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002134340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:98 +0x89
testing.tRunner(0xc002134340, 0x3b43460)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2097 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc002242000, 0xc002726060)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 545
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 2080 [IO wait]:
internal/poll.runtime_pollWait(0x4cac0358, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275c120?, 0xc00054d463?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275c120, {0xc00054d463, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004d60c0, {0xc00054d463?, 0xc000114668?, 0xc000114668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00223c270, {0x3f91b00, 0xc0004d60c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f91b80, 0xc00223c270}, {0x3f91b00, 0xc0004d60c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002714d80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 545
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1194 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc00256dce0, 0xc00256ed80)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1193
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 1785 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002134820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002134820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc002134820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:305 +0xb4
testing.tRunner(0xc002134820, 0x3b43420)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 836 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00280f200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 750
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 868 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fb5b38, 0xc000064f60}, 0xc00010ff50, 0xc0026577f8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fb5b38, 0xc000064f60}, 0x1?, 0x1?, 0xc00010ffb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fb5b38?, 0xc000064f60?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00010ffd0?, 0x117bdc7?, 0xc000b1f020?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 837
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 867 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0028a9e50, 0x2b)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f8eaf0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00280f0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0028a9e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f93000, 0xc002222a80}, 0x1, 0xc000064f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0029f1200?, 0x3b9aca00, 0x0, 0xd0?, 0x104475c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117bd65?, 0xc0021b1b80?, 0xc000b1f140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 837
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 869 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 868
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 1760 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc000c064b0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002134000)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002134000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc00084f020?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc002134000, 0x3b43480)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

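
Note on the dump above: the stacks for TestMissingContainerUpgrade and TestStartStop are both parked in testing.(*testContext).waitParallel. Each calls the suite's MaybeParallel helper (helpers_test.go:483), which hands off to t.Parallel(), and t.Parallel() queues the test until a slot under -test.parallel frees up; with the earlier starts hanging for 12+ minutes apiece, queued tests sit here for half an hour. A minimal sketch of the helper's shape, inferred from the stack (the guard condition is an assumption, not a quote of minikube's code):

package integration

import "testing"

// MaybeParallel marks the test parallel unless the driver cannot run
// concurrent clusters (hypothetical guard, e.g. the "none" driver).
func MaybeParallel(t *testing.T) {
	t.Helper()
	if serialOnlyDriver() {
		return
	}
	t.Parallel() // parks this goroutine in waitParallel until a slot frees up
}

func serialOnlyDriver() bool { return false }
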
TestDockerFlags (754.75s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-557000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E1226 15:28:46.354906    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:29:36.327378    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:33:29.409855    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:33:46.353068    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:34:36.326863    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
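
These cert_rotation errors are background noise from client-go's dynamicClientCert reloader, the same code path parked in goroutines 867-869 in the dump above: it re-reads each profile's client certificate on a timer, and the addons-914000 and functional-155000 profiles have already been torn down, so every tick fails with "no such file or directory". A stdlib-only sketch of that reload-loop shape (the interval, stop channel, and logging are illustrative assumptions; the real loop uses a workqueue with backoff):

package main

import (
	"fmt"
	"os"
	"time"
)

// watchCert periodically re-reads a client certificate and logs each failure,
// which is all the E-lines above amount to once the profile directory is gone.
func watchCert(path string, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			if _, err := os.ReadFile(path); err != nil {
				fmt.Printf("key failed with : %v\n", err)
			}
		}
	}
}

func main() {
	stop := make(chan struct{})
	go watchCert("/Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt", time.Second, stop)
	time.Sleep(3 * time.Second)
	close(stop)
}
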
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-557000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.397221942s)

-- stdout --
	* [docker-flags-557000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-557000 in cluster docker-flags-557000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-557000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1226 15:25:33.985497    9679 out.go:296] Setting OutFile to fd 1 ...
	I1226 15:25:33.985795    9679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:25:33.985801    9679 out.go:309] Setting ErrFile to fd 2...
	I1226 15:25:33.985805    9679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:25:33.985985    9679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 15:25:33.987643    9679 out.go:303] Setting JSON to false
	I1226 15:25:34.010915    9679 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6903,"bootTime":1703626230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 15:25:34.011014    9679 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 15:25:34.032932    9679 out.go:177] * [docker-flags-557000] minikube v1.32.0 on Darwin 14.2.1
	I1226 15:25:34.054678    9679 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 15:25:34.054774    9679 notify.go:220] Checking for updates...
	I1226 15:25:34.075784    9679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 15:25:34.097705    9679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 15:25:34.119673    9679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 15:25:34.140671    9679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 15:25:34.161688    9679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 15:25:34.183460    9679 config.go:182] Loaded profile config "force-systemd-flag-051000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 15:25:34.183622    9679 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 15:25:34.243971    9679 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 15:25:34.244131    9679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:25:34.350468    9679 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:198 SystemTime:2023-12-26 23:25:34.338787454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 15:25:34.372158    9679 out.go:177] * Using the docker driver based on user configuration
	I1226 15:25:34.394089    9679 start.go:298] selected driver: docker
	I1226 15:25:34.394110    9679 start.go:902] validating driver "docker" against <nil>
	I1226 15:25:34.394123    9679 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 15:25:34.398496    9679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:25:34.505211    9679 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:198 SystemTime:2023-12-26 23:25:34.494766271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 15:25:34.505386    9679 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 15:25:34.505570    9679 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1226 15:25:34.526731    9679 out.go:177] * Using Docker Desktop driver with root privileges
	I1226 15:25:34.548651    9679 cni.go:84] Creating CNI manager for ""
	I1226 15:25:34.548687    9679 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 15:25:34.548715    9679 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1226 15:25:34.548729    9679 start_flags.go:323] config:
	{Name:docker-flags-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 15:25:34.570506    9679 out.go:177] * Starting control plane node docker-flags-557000 in cluster docker-flags-557000
	I1226 15:25:34.612512    9679 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 15:25:34.634547    9679 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 15:25:34.676623    9679 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:25:34.676686    9679 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 15:25:34.676708    9679 cache.go:56] Caching tarball of preloaded images
	I1226 15:25:34.676718    9679 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 15:25:34.676916    9679 preload.go:174] Found /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 15:25:34.676934    9679 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 15:25:34.677141    9679 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/docker-flags-557000/config.json ...
	I1226 15:25:34.677819    9679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/docker-flags-557000/config.json: {Name:mk89cdb8e7e0e69e43c6def0d1f7cb33991eb178 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 15:25:34.731660    9679 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 15:25:34.731709    9679 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 15:25:34.731730    9679 cache.go:194] Successfully downloaded all kic artifacts
	I1226 15:25:34.731778    9679 start.go:365] acquiring machines lock for docker-flags-557000: {Name:mk2e5f1835773a2eaa2cfd1b1714af6efc1d6495 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:25:34.731937    9679 start.go:369] acquired machines lock for "docker-flags-557000" in 143.231µs
	I1226 15:25:34.731965    9679 start.go:93] Provisioning new machine with config: &{Name:docker-flags-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 15:25:34.732033    9679 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:25:34.754273    9679 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:25:34.754673    9679 start.go:159] libmachine.API.Create for "docker-flags-557000" (driver="docker")
	I1226 15:25:34.754726    9679 client.go:168] LocalClient.Create starting
	I1226 15:25:34.754872    9679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:25:34.754958    9679 main.go:141] libmachine: Decoding PEM data...
	I1226 15:25:34.754991    9679 main.go:141] libmachine: Parsing certificate...
	I1226 15:25:34.755087    9679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:25:34.755160    9679 main.go:141] libmachine: Decoding PEM data...
	I1226 15:25:34.755176    9679 main.go:141] libmachine: Parsing certificate...
	I1226 15:25:34.755882    9679 cli_runner.go:164] Run: docker network inspect docker-flags-557000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:25:34.811990    9679 cli_runner.go:211] docker network inspect docker-flags-557000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:25:34.812088    9679 network_create.go:281] running [docker network inspect docker-flags-557000] to gather additional debugging logs...
	I1226 15:25:34.812110    9679 cli_runner.go:164] Run: docker network inspect docker-flags-557000
	W1226 15:25:34.865903    9679 cli_runner.go:211] docker network inspect docker-flags-557000 returned with exit code 1
	I1226 15:25:34.865951    9679 network_create.go:284] error running [docker network inspect docker-flags-557000]: docker network inspect docker-flags-557000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-557000 not found
	I1226 15:25:34.865971    9679 network_create.go:286] output of [docker network inspect docker-flags-557000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-557000 not found
	
	** /stderr **
	I1226 15:25:34.866125    9679 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:25:34.921821    9679 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:25:34.923510    9679 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:25:34.925116    9679 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:25:34.925469    9679 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002408a20}
	I1226 15:25:34.925484    9679 network_create.go:124] attempt to create docker network docker-flags-557000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1226 15:25:34.925551    9679 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-557000 docker-flags-557000
	I1226 15:25:35.104822    9679 network_create.go:108] docker network docker-flags-557000 192.168.76.0/24 created
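
The subnet walk above is mechanical: candidate private /24s start at 192.168.49.0 and advance the third octet in steps of 9 (49, 58, 67, 76, ...), skipping anything an existing docker network has already reserved, and the first free candidate wins, 192.168.76.0/24 in this run. A compact sketch of that walk (the start octet and step match the log; the upper bound and function name are assumptions for illustration):

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate 192.168.x.0/24 subnet that
// is not in the reserved set, walking the third octet in steps of 9.
func freePrivateSubnet(reserved map[string]bool) (*net.IPNet, error) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[cidr] {
			continue // already taken by an existing network
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	// The three subnets the log shows as reserved in this run.
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	subnet, err := freePrivateSubnet(reserved)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.76.0/24, matching the log
}
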
	I1226 15:25:35.104884    9679 kic.go:121] calculated static IP "192.168.76.2" for the "docker-flags-557000" container
	I1226 15:25:35.105010    9679 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:25:35.163206    9679 cli_runner.go:164] Run: docker volume create docker-flags-557000 --label name.minikube.sigs.k8s.io=docker-flags-557000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:25:35.218304    9679 oci.go:103] Successfully created a docker volume docker-flags-557000
	I1226 15:25:35.218431    9679 cli_runner.go:164] Run: docker run --rm --name docker-flags-557000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-557000 --entrypoint /usr/bin/test -v docker-flags-557000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:25:35.598840    9679 oci.go:107] Successfully prepared a docker volume docker-flags-557000
	I1226 15:25:35.598880    9679 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:25:35.598893    9679 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:25:35.599023    9679 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-557000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
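
The two docker run calls above are the kic preload pattern: a throwaway container with --entrypoint /usr/bin/test first probes the freshly created named volume, then a second one mounts the lz4 preload tarball read-only alongside the volume and untars it into /extractDir. Roughly, in Go (the command, image, and paths are taken from this run; the error handling is a sketch):

package main

import (
	"log"
	"os/exec"
)

func main() {
	tarball := "/Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
	volume := "docker-flags-557000"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c"

	// One-shot tar container: mount the preload read-only, untar into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}

Note the six-minute gap between this line and the next log entry: the extraction ran while the overall createHost deadline was already ticking down.
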
	I1226 15:31:34.753715    9679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:31:34.753854    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:34.811189    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:31:34.811318    9679 retry.go:31] will retry after 166.290545ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:34.978022    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:35.032294    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:31:35.032408    9679 retry.go:31] will retry after 457.930346ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:35.490769    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:35.547670    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:31:35.547782    9679 retry.go:31] will retry after 794.192737ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:36.342551    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:36.397726    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	W1226 15:31:36.397830    9679 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	W1226 15:31:36.397854    9679 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:36.397915    9679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:31:36.397984    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:36.451780    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:31:36.451893    9679 retry.go:31] will retry after 333.63206ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:36.785909    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:36.839766    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:31:36.839876    9679 retry.go:31] will retry after 510.160753ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:37.352363    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:37.405298    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:31:37.405400    9679 retry.go:31] will retry after 370.475689ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:37.776866    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:31:37.829903    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	W1226 15:31:37.830003    9679 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	W1226 15:31:37.830026    9679 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
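
By this point the shape of the failure is clear: the container was never created, so every docker container inspect probe for port 22 fails, and retry.go:31 backs off with a jittered, growing delay before the next attempt (166ms, 457ms, 794ms, ...). A minimal sketch of a retry helper with that shape (the delays, jitter, and attempt cap are illustrative, not minikube's actual tuning):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a randomized, doubling delay between tries.
func retryWithBackoff(attempts int, base, maxDelay time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter: sleep between 0.5x and 1.5x of the current delay.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 200*time.Millisecond, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("No such container: docker-flags-557000")
		}
		return nil
	})
}
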
	I1226 15:31:37.830042    9679 start.go:128] duration metric: createHost completed in 6m3.10020056s
	I1226 15:31:37.830049    9679 start.go:83] releasing machines lock for "docker-flags-557000", held for 6m3.100315728s
	W1226 15:31:37.830061    9679 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1226 15:31:37.830553    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:37.886131    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:37.886189    9679 delete.go:82] Unable to get host status for docker-flags-557000, assuming it has already been deleted: state: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	W1226 15:31:37.886277    9679 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1226 15:31:37.886286    9679 start.go:709] Will try again in 5 seconds ...
	I1226 15:31:42.886534    9679 start.go:365] acquiring machines lock for docker-flags-557000: {Name:mk2e5f1835773a2eaa2cfd1b1714af6efc1d6495 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:31:42.887443    9679 start.go:369] acquired machines lock for "docker-flags-557000" in 861.884µs
	I1226 15:31:42.887541    9679 start.go:96] Skipping create...Using existing machine configuration
	I1226 15:31:42.887559    9679 fix.go:54] fixHost starting: 
	I1226 15:31:42.888012    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:42.941117    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:42.941176    9679 fix.go:102] recreateIfNeeded on docker-flags-557000: state= err=unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:42.941207    9679 fix.go:107] machineExists: false. err=machine does not exist
	I1226 15:31:42.962949    9679 out.go:177] * docker "docker-flags-557000" container is missing, will recreate.
	I1226 15:31:43.005535    9679 delete.go:124] DEMOLISHING docker-flags-557000 ...
	I1226 15:31:43.005745    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:43.060446    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	W1226 15:31:43.060489    9679 stop.go:75] unable to get state: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:43.060509    9679 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:43.060907    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:43.114306    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:43.114374    9679 delete.go:82] Unable to get host status for docker-flags-557000, assuming it has already been deleted: state: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:43.114458    9679 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-557000
	W1226 15:31:43.167753    9679 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-557000 returned with exit code 1
	I1226 15:31:43.167802    9679 kic.go:371] could not find the container docker-flags-557000 to remove it. will try anyways
	I1226 15:31:43.167941    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:43.221137    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	W1226 15:31:43.221186    9679 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:43.221285    9679 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-557000 /bin/bash -c "sudo init 0"
	W1226 15:31:43.274028    9679 cli_runner.go:211] docker exec --privileged -t docker-flags-557000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 15:31:43.274061    9679 oci.go:650] error shutdown docker-flags-557000: docker exec --privileged -t docker-flags-557000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:44.275374    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:44.329052    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:44.329100    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:44.329119    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:31:44.329142    9679 retry.go:31] will retry after 673.730334ms: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:45.003836    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:45.060747    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:45.060801    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:45.060813    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:31:45.060838    9679 retry.go:31] will retry after 942.949953ms: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:46.004218    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:46.059478    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:46.059537    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:46.059547    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:31:46.059589    9679 retry.go:31] will retry after 874.508502ms: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:46.934922    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:46.988809    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:46.988859    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:46.988869    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:31:46.988893    9679 retry.go:31] will retry after 1.368224467s: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:48.357501    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:48.413221    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:48.413283    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:48.413292    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:31:48.413317    9679 retry.go:31] will retry after 2.734220359s: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:51.148093    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:51.202553    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:51.202601    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:51.202612    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:31:51.202639    9679 retry.go:31] will retry after 4.623024987s: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:55.826036    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:31:55.879760    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:55.879811    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:31:55.879821    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:31:55.879846    9679 retry.go:31] will retry after 4.201358379s: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:32:00.081487    9679 cli_runner.go:164] Run: docker container inspect docker-flags-557000 --format={{.State.Status}}
	W1226 15:32:00.137439    9679 cli_runner.go:211] docker container inspect docker-flags-557000 --format={{.State.Status}} returned with exit code 1
	I1226 15:32:00.137486    9679 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:32:00.137499    9679 oci.go:664] temporary error: container docker-flags-557000 status is  but expect it to be exited
	I1226 15:32:00.137531    9679 oci.go:88] couldn't shut down docker-flags-557000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	 
	I1226 15:32:00.137609    9679 cli_runner.go:164] Run: docker rm -f -v docker-flags-557000
	I1226 15:32:00.193958    9679 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-557000
	W1226 15:32:00.247076    9679 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-557000 returned with exit code 1
	I1226 15:32:00.247192    9679 cli_runner.go:164] Run: docker network inspect docker-flags-557000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:32:00.303291    9679 cli_runner.go:164] Run: docker network rm docker-flags-557000
	I1226 15:32:00.406456    9679 fix.go:114] Sleeping 1 second for extra luck!
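
Since the container never existed, the graceful teardown above (docker exec ... "sudo init 0", then polling .State.Status with backoff) can only fail, and the flow falls back to unconditional removal of the container and its network before retrying createHost. The shape of that fallback, sketched with os/exec (failures are tolerated because "No such container" is the expected case here; the function name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// forceCleanup removes the container and its network unconditionally,
// mirroring the docker rm -f -v / docker network rm sequence in the log.
func forceCleanup(name string) {
	if out, err := exec.Command("docker", "rm", "-f", "-v", name).CombinedOutput(); err != nil {
		fmt.Printf("docker rm -f -v %s: %v\n%s", name, err, out)
	}
	if out, err := exec.Command("docker", "network", "rm", name).CombinedOutput(); err != nil {
		fmt.Printf("docker network rm %s: %v\n%s", name, err, out)
	}
}

func main() {
	forceCleanup("docker-flags-557000")
}
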
	I1226 15:32:01.406639    9679 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:32:01.429599    9679 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:32:01.429773    9679 start.go:159] libmachine.API.Create for "docker-flags-557000" (driver="docker")
	I1226 15:32:01.429809    9679 client.go:168] LocalClient.Create starting
	I1226 15:32:01.430112    9679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:32:01.430211    9679 main.go:141] libmachine: Decoding PEM data...
	I1226 15:32:01.430239    9679 main.go:141] libmachine: Parsing certificate...
	I1226 15:32:01.430322    9679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:32:01.430390    9679 main.go:141] libmachine: Decoding PEM data...
	I1226 15:32:01.430406    9679 main.go:141] libmachine: Parsing certificate...
	I1226 15:32:01.452187    9679 cli_runner.go:164] Run: docker network inspect docker-flags-557000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:32:01.507621    9679 cli_runner.go:211] docker network inspect docker-flags-557000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:32:01.507729    9679 network_create.go:281] running [docker network inspect docker-flags-557000] to gather additional debugging logs...
	I1226 15:32:01.507751    9679 cli_runner.go:164] Run: docker network inspect docker-flags-557000
	W1226 15:32:01.561623    9679 cli_runner.go:211] docker network inspect docker-flags-557000 returned with exit code 1
	I1226 15:32:01.561669    9679 network_create.go:284] error running [docker network inspect docker-flags-557000]: docker network inspect docker-flags-557000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-557000 not found
	I1226 15:32:01.561688    9679 network_create.go:286] output of [docker network inspect docker-flags-557000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-557000 not found
	
	** /stderr **
	I1226 15:32:01.561842    9679 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:32:01.617634    9679 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:32:01.619241    9679 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:32:01.620892    9679 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:32:01.622378    9679 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:32:01.623913    9679 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:32:01.624565    9679 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00049df50}
	I1226 15:32:01.624583    9679 network_create.go:124] attempt to create docker network docker-flags-557000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1226 15:32:01.624716    9679 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-557000 docker-flags-557000
	I1226 15:32:01.716347    9679 network_create.go:108] docker network docker-flags-557000 192.168.94.0/24 created
	I1226 15:32:01.716390    9679 kic.go:121] calculated static IP "192.168.94.2" for the "docker-flags-557000" container
	I1226 15:32:01.716496    9679 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:32:01.771955    9679 cli_runner.go:164] Run: docker volume create docker-flags-557000 --label name.minikube.sigs.k8s.io=docker-flags-557000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:32:01.823765    9679 oci.go:103] Successfully created a docker volume docker-flags-557000
	I1226 15:32:01.823891    9679 cli_runner.go:164] Run: docker run --rm --name docker-flags-557000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-557000 --entrypoint /usr/bin/test -v docker-flags-557000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:32:02.154027    9679 oci.go:107] Successfully prepared a docker volume docker-flags-557000
	I1226 15:32:02.154064    9679 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:32:02.154077    9679 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:32:02.154185    9679 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-557000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 15:38:01.429999    9679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:38:01.430148    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:01.484795    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:01.484902    9679 retry.go:31] will retry after 331.731433ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:01.818540    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:01.872114    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:01.872231    9679 retry.go:31] will retry after 445.933536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:02.318577    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:02.372189    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:02.372299    9679 retry.go:31] will retry after 305.635893ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:02.678328    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:02.731268    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	W1226 15:38:02.731369    9679 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	W1226 15:38:02.731435    9679 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:02.731518    9679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:38:02.731588    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:02.787477    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:02.787646    9679 retry.go:31] will retry after 274.393149ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:03.064068    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:03.126281    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:03.126396    9679 retry.go:31] will retry after 213.441242ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:03.340186    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:03.394526    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:03.394624    9679 retry.go:31] will retry after 768.250737ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:04.163171    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:04.219290    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	W1226 15:38:04.219401    9679 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	W1226 15:38:04.219433    9679 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:04.219455    9679 start.go:128] duration metric: createHost completed in 6m2.815004268s
	I1226 15:38:04.219525    9679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:38:04.219582    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:04.272813    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:04.272913    9679 retry.go:31] will retry after 221.085141ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:04.495459    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:04.549622    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:04.549722    9679 retry.go:31] will retry after 360.503304ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:04.912575    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:04.966786    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:04.966877    9679 retry.go:31] will retry after 508.996795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:05.476607    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:05.530477    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	W1226 15:38:05.530575    9679 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	W1226 15:38:05.530593    9679 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:05.530655    9679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:38:05.530720    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:05.583253    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:05.583371    9679 retry.go:31] will retry after 285.358859ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:05.871091    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:05.926297    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:05.926450    9679 retry.go:31] will retry after 505.916616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:06.432835    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:06.487696    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	I1226 15:38:06.487785    9679 retry.go:31] will retry after 648.481298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:07.137172    9679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000
	W1226 15:38:07.191068    9679 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000 returned with exit code 1
	W1226 15:38:07.191170    9679 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	W1226 15:38:07.191187    9679 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-557000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-557000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	I1226 15:38:07.191199    9679 fix.go:56] fixHost completed within 6m24.30598288s
	I1226 15:38:07.191205    9679 start.go:83] releasing machines lock for "docker-flags-557000", held for 6m24.306032155s
	W1226 15:38:07.191281    9679 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-557000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-557000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1226 15:38:07.234585    9679 out.go:177] 
	W1226 15:38:07.255741    9679 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1226 15:38:07.255794    9679 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1226 15:38:07.255844    9679 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1226 15:38:07.277602    9679 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-557000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
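
The stderr trace above shows the failure mechanism: createHost started at 15:32:01, the preload extraction kicked off at 15:32:02, and the next log entry is 15:38:01, by which point the 360-second create-host budget named in the final error had elapsed and every subsequent docker container inspect probe failed because the node container was never created. The repeated retry.go:31 entries are minikube's jittered-backoff poll against that missing container. Below is a minimal, self-contained Go sketch of that poll-until-deadline pattern; the helper names are invented for illustration and are not minikube's actual API.

	// Sketch: poll a container's state with jittered backoff until a deadline,
	// the pattern behind the retry.go:31 lines above. Illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)

	// containerState shells out to docker the same way cli_runner.go does.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w: %s", name, err, strings.TrimSpace(string(out)))
		}
		return strings.TrimSpace(string(out)), nil
	}

	// waitRunning retries until the container reports "running" or the
	// deadline passes, mirroring the 360-second createHost budget.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if state, err := containerState(name); err == nil && state == "running" {
				return nil
			}
			// a few hundred milliseconds with jitter, like the "will retry after" lines
			time.Sleep(200*time.Millisecond + time.Duration(rand.Intn(600))*time.Millisecond)
		}
		return errors.New("create host timed out")
	}

	func main() {
		if err := waitRunning("docker-flags-557000", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Run against a container that never appears, a loop like this emits exactly the inspect-then-retry cadence seen in the trace until the deadline fires.
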
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-557000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-557000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (209.166927ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-557000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-557000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-557000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (203.130571ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-557000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-557000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
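
Both assertion failures are downstream of the start timeout: with no machine to ssh into, each command returned empty output, so the expected substrings could not match. The checks themselves verify that every --docker-env pair surfaces in the docker unit's Environment property and that --docker-opt=debug surfaces as a --debug flag in its ExecStart. A paraphrased Go sketch of those checks follows; the quoted property values illustrate what a healthy run would plausibly return and are not captured output.

	// Paraphrased sketch of the two assertions that failed above; the real
	// test fills envOut and execOut via `minikube ssh "sudo systemctl show docker ..."`.
	package docker_test

	import (
		"strings"
		"testing"
	)

	func TestDockerFlagsAssertionsSketch(t *testing.T) {
		envOut := "Environment=FOO=BAR BAZ=BAT"                                              // illustrative healthy output
		execOut := "ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }" // illustrative
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(envOut, kv) {
				t.Errorf("expected env %q to be passed to minikube's docker, got %q", kv, envOut)
			}
		}
		if !strings.Contains(execOut, "--debug") {
			t.Errorf("expected ExecStart to include --debug, got %q", execOut)
		}
	}
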
panic.go:523: *** TestDockerFlags FAILED at 2023-12-26 15:38:07.7457 -0800 PST m=+6852.002943722
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-557000
helpers_test.go:235: (dbg) docker inspect docker-flags-557000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-557000",
	        "Id": "447cf43cea170202e5377fc794fe730bee9c59adc06feb83045d5fa31bec9369",
	        "Created": "2023-12-26T23:32:01.675163569Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-557000"
	        }
	    }
	]

-- /stdout --
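
The post-mortem inspect finds only the bridge network; the container it was created for never existed, consistent with the "No such container" errors throughout. The subnet it carries, 192.168.94.0/24, came from the scan logged at 15:32:01: minikube walks candidate private /24s upward from 192.168.49.0 in steps of 9 and takes the first one that no existing network reserves. A rough Go sketch of that walk, illustrative only (the real logic lives in the network.go seen in the log):

	// Sketch of the free-subnet walk: candidates start at 192.168.49.0/24
	// and step by 9 until one is not already reserved.
	package main

	import "fmt"

	func main() {
		reserved := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
		for third := 49; third <= 245; third += 9 {
			if reserved[third] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is reserved\n", third)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
			break
		}
	}

With the five reserved subnets above, the walk lands on 192.168.94.0/24, matching the network left behind.
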
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-557000 -n docker-flags-557000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-557000 -n docker-flags-557000: exit status 7 (113.210138ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1226 15:38:07.913689   10291 status.go:249] status error: host: state: unknown state "docker-flags-557000": docker container inspect docker-flags-557000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-557000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-557000" host is not running, skipping log retrieval (state="Nonexistent")
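
For reference, minikube status encodes component health in the exit code's low bits (1 = minikube host not OK, 2 = cluster not OK, 4 = Kubernetes not OK, per the command's documentation), so exit status 7 is the expected code for a profile whose host does not exist at all. A tiny decoder sketch:

	// Decode the status exit code seen above; bit meanings are taken from
	// minikube's status documentation.
	package main

	import "fmt"

	func main() {
		const code = 7
		for _, f := range []struct {
			bit  int
			what string
		}{{1, "minikube host"}, {2, "cluster"}, {4, "kubernetes"}} {
			if code&f.bit != 0 {
				fmt.Printf("%s: not OK\n", f.what)
			}
		}
	}
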
helpers_test.go:175: Cleaning up "docker-flags-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-557000
--- FAIL: TestDockerFlags (754.75s)
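
One detail worth pulling out of the trace: before creating the node container, minikube provisions a named volume and unpacks the preloaded image tarball into it with a throwaway kicbase container, and in this run that extraction (started 15:32:02) appears to have consumed the entire create-host budget. The equivalent docker CLI calls, replayed from Go via os/exec the way cli_runner does (a simplified sketch; the image digest is elided and errors simply abort):

	// Sketch of the volume-provisioning step from the trace: create a named
	// volume, then unpack the preload tarball into it with a one-shot container.
	package main

	import (
		"log"
		"os/exec"
	)

	func docker(args ...string) {
		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
			log.Fatalf("docker %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		vol := "docker-flags-557000"
		img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857"
		// path copied from the cli_runner line at 15:32:02 above
		tarball := "/Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
		docker("volume", "create", vol)
		docker("run", "--rm", "--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", "-v", vol+":/extractDir",
			img, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	}
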

TestForceSystemdFlag (756.86s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-051000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-051000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.714950544s)

-- stdout --
	* [force-systemd-flag-051000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-051000 in cluster force-systemd-flag-051000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-051000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1226 15:24:56.237729    9557 out.go:296] Setting OutFile to fd 1 ...
	I1226 15:24:56.237928    9557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:24:56.237934    9557 out.go:309] Setting ErrFile to fd 2...
	I1226 15:24:56.237939    9557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:24:56.238121    9557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 15:24:56.239756    9557 out.go:303] Setting JSON to false
	I1226 15:24:56.263638    9557 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6866,"bootTime":1703626230,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 15:24:56.263733    9557 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 15:24:56.292686    9557 out.go:177] * [force-systemd-flag-051000] minikube v1.32.0 on Darwin 14.2.1
	I1226 15:24:56.313714    9557 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 15:24:56.313809    9557 notify.go:220] Checking for updates...
	I1226 15:24:56.356792    9557 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 15:24:56.377654    9557 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 15:24:56.398718    9557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 15:24:56.419643    9557 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 15:24:56.440541    9557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 15:24:56.462619    9557 config.go:182] Loaded profile config "force-systemd-env-026000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 15:24:56.462780    9557 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 15:24:56.523591    9557 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 15:24:56.523741    9557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:24:56.630685    9557 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-12-26 23:24:56.620350799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 15:24:56.652368    9557 out.go:177] * Using the docker driver based on user configuration
	I1226 15:24:56.694322    9557 start.go:298] selected driver: docker
	I1226 15:24:56.694371    9557 start.go:902] validating driver "docker" against <nil>
	I1226 15:24:56.694385    9557 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 15:24:56.698791    9557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:24:56.805026    9557 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-12-26 23:24:56.794548252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 15:24:56.805200    9557 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 15:24:56.805409    9557 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1226 15:24:56.826433    9557 out.go:177] * Using Docker Desktop driver with root privileges
	I1226 15:24:56.847256    9557 cni.go:84] Creating CNI manager for ""
	I1226 15:24:56.847291    9557 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 15:24:56.847321    9557 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1226 15:24:56.847336    9557 start_flags.go:323] config:
	{Name:force-systemd-flag-051000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-051000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 15:24:56.869237    9557 out.go:177] * Starting control plane node force-systemd-flag-051000 in cluster force-systemd-flag-051000
	I1226 15:24:56.890292    9557 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 15:24:56.911086    9557 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 15:24:56.953218    9557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:24:56.953282    9557 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 15:24:56.953304    9557 cache.go:56] Caching tarball of preloaded images
	I1226 15:24:56.953317    9557 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 15:24:56.953524    9557 preload.go:174] Found /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 15:24:56.953542    9557 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 15:24:56.953648    9557 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/force-systemd-flag-051000/config.json ...
	I1226 15:24:56.953680    9557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/force-systemd-flag-051000/config.json: {Name:mke0c03d0e458b854270047f42e990803f91789b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 15:24:57.009782    9557 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 15:24:57.009831    9557 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 15:24:57.009851    9557 cache.go:194] Successfully downloaded all kic artifacts
	I1226 15:24:57.009915    9557 start.go:365] acquiring machines lock for force-systemd-flag-051000: {Name:mk61e543f3cada2792cbe66c8d45f212f3035bec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:24:57.010085    9557 start.go:369] acquired machines lock for "force-systemd-flag-051000" in 154.913µs
	I1226 15:24:57.010112    9557 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-051000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-051000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 15:24:57.010214    9557 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:24:57.031746    9557 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:24:57.032137    9557 start.go:159] libmachine.API.Create for "force-systemd-flag-051000" (driver="docker")
	I1226 15:24:57.032212    9557 client.go:168] LocalClient.Create starting
	I1226 15:24:57.032399    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:24:57.032485    9557 main.go:141] libmachine: Decoding PEM data...
	I1226 15:24:57.032513    9557 main.go:141] libmachine: Parsing certificate...
	I1226 15:24:57.032616    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:24:57.032684    9557 main.go:141] libmachine: Decoding PEM data...
	I1226 15:24:57.032701    9557 main.go:141] libmachine: Parsing certificate...
	I1226 15:24:57.033725    9557 cli_runner.go:164] Run: docker network inspect force-systemd-flag-051000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:24:57.091485    9557 cli_runner.go:211] docker network inspect force-systemd-flag-051000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:24:57.091592    9557 network_create.go:281] running [docker network inspect force-systemd-flag-051000] to gather additional debugging logs...
	I1226 15:24:57.091609    9557 cli_runner.go:164] Run: docker network inspect force-systemd-flag-051000
	W1226 15:24:57.146525    9557 cli_runner.go:211] docker network inspect force-systemd-flag-051000 returned with exit code 1
	I1226 15:24:57.146554    9557 network_create.go:284] error running [docker network inspect force-systemd-flag-051000]: docker network inspect force-systemd-flag-051000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-051000 not found
	I1226 15:24:57.146568    9557 network_create.go:286] output of [docker network inspect force-systemd-flag-051000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-051000 not found
	
	** /stderr **
	I1226 15:24:57.146740    9557 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:24:57.202075    9557 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:24:57.202530    9557 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021c1e30}
	I1226 15:24:57.202548    9557 network_create.go:124] attempt to create docker network force-systemd-flag-051000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1226 15:24:57.202615    9557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-051000 force-systemd-flag-051000
	W1226 15:24:57.256734    9557 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-051000 force-systemd-flag-051000 returned with exit code 1
	W1226 15:24:57.256779    9557 network_create.go:149] failed to create docker network force-systemd-flag-051000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-051000 force-systemd-flag-051000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1226 15:24:57.256795    9557 network_create.go:116] failed to create docker network force-systemd-flag-051000 192.168.58.0/24, will retry: subnet is taken
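The candidate subnets in this log (192.168.49.0/24, .58.0/24, .67.0/24, ...) advance by 9 in the third octet; a subnet that the daemon rejects with "Pool overlaps" is marked taken and the picker moves on. A sketch of that pattern (an illustration, not minikube's network.go):

package main

import "fmt"

// candidateSubnets lists the first n /24s in the sequence seen above.
func candidateSubnets(n int) []string {
	out := make([]string, 0, n)
	for octet := 49; octet <= 255 && len(out) < n; octet += 9 {
		out = append(out, fmt.Sprintf("192.168.%d.0/24", octet))
	}
	return out
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // reserved by another profile
		"192.168.58.0/24": true, // "Pool overlaps with other one on this address space"
	}
	for _, s := range candidateSubnets(5) {
		if !taken[s] {
			fmt.Println("using free private subnet", s) // 192.168.67.0/24
			break
		}
	}
}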
	I1226 15:24:57.258212    9557 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:24:57.258583    9557 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022ee5a0}
	I1226 15:24:57.258597    9557 network_create.go:124] attempt to create docker network force-systemd-flag-051000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1226 15:24:57.258657    9557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-051000 force-systemd-flag-051000
	I1226 15:24:57.351604    9557 network_create.go:108] docker network force-systemd-flag-051000 192.168.67.0/24 created
	I1226 15:24:57.351641    9557 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-051000" container
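The "calculated static IP" follows the convention visible in the log line above: the node takes the first client address of the chosen /24, i.e. the network base plus 2, since the gateway takes .1. A tiny sketch of that arithmetic (assuming IPv4 /24s, as in this run):

package main

import (
	"fmt"
	"net"
)

func staticIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	base := ipnet.IP.To4() // e.g. 192.168.67.0
	if base == nil {
		return nil, fmt.Errorf("not an IPv4 subnet: %s", cidr)
	}
	return net.IPv4(base[0], base[1], base[2], base[3]+2), nil
}

func main() {
	ip, _ := staticIP("192.168.67.0/24")
	fmt.Println(ip) // 192.168.67.2, matching the log
}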
	I1226 15:24:57.351771    9557 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:24:57.409816    9557 cli_runner.go:164] Run: docker volume create force-systemd-flag-051000 --label name.minikube.sigs.k8s.io=force-systemd-flag-051000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:24:57.464080    9557 oci.go:103] Successfully created a docker volume force-systemd-flag-051000
	I1226 15:24:57.464196    9557 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-051000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-051000 --entrypoint /usr/bin/test -v force-systemd-flag-051000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:24:57.852093    9557 oci.go:107] Successfully prepared a docker volume force-systemd-flag-051000
	I1226 15:24:57.852135    9557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:24:57.852148    9557 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:24:57.852284    9557 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-051000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
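The tar sidecar above appears to be where this attempt stalls: the next log line arrives six minutes later, after the 360-second createHost timeout has already expired. For reference, the shape of that extraction call as a Go sketch (tarball path and image tag below are placeholders; the real values are in the log line above):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs a throwaway container whose entrypoint is tar,
// unpacking the lz4 preload tarball into the named Docker volume.
func extractPreload(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload("preloaded-images.tar.lz4", // placeholder path
		"force-systemd-flag-051000", "gcr.io/k8s-minikube/kicbase-builds:<tag>")
	fmt.Println(err)
}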
	I1226 15:30:57.030629    9557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:30:57.030766    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:30:57.084406    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:30:57.084511    9557 retry.go:31] will retry after 369.203544ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
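Each "will retry after ...ms" line in this stretch comes from a generic retry helper. A simplified sketch of the pattern (jittered, growing backoff under an overall deadline; not minikube's retry.go verbatim):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(op func() error, maxTime time.Duration) error {
	deadline := time.Now().Add(maxTime)
	delay := 200 * time.Millisecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("No such container: force-systemd-flag-051000")
		}
		return nil
	}, 5*time.Second)
}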
	I1226 15:30:57.455992    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:30:57.508865    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:30:57.508984    9557 retry.go:31] will retry after 384.793379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:30:57.896137    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:30:57.951661    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:30:57.951803    9557 retry.go:31] will retry after 828.390513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:30:58.780904    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:30:58.835850    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	W1226 15:30:58.835985    9557 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	
	W1226 15:30:58.836036    9557 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:30:58.836114    9557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:30:58.836191    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:30:58.889292    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:30:58.889388    9557 retry.go:31] will retry after 296.766297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:30:59.186558    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:30:59.239729    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:30:59.239824    9557 retry.go:31] will retry after 536.598248ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:30:59.776822    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:30:59.830310    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:30:59.830405    9557 retry.go:31] will retry after 755.500081ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:00.586722    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:31:00.639798    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	W1226 15:31:00.639914    9557 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	
	W1226 15:31:00.639931    9557 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:00.639955    9557 start.go:128] duration metric: createHost completed in 6m3.631944134s
	I1226 15:31:00.639985    9557 start.go:83] releasing machines lock for "force-systemd-flag-051000", held for 6m3.632106242s
	W1226 15:31:00.639998    9557 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1226 15:31:00.640441    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:00.693204    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:00.693260    9557 delete.go:82] Unable to get host status for force-systemd-flag-051000, assuming it has already been deleted: state: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	W1226 15:31:00.693339    9557 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1226 15:31:00.693351    9557 start.go:709] Will try again in 5 seconds ...
	I1226 15:31:05.693480    9557 start.go:365] acquiring machines lock for force-systemd-flag-051000: {Name:mk61e543f3cada2792cbe66c8d45f212f3035bec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:31:05.694244    9557 start.go:369] acquired machines lock for "force-systemd-flag-051000" in 96.289µs
	I1226 15:31:05.694271    9557 start.go:96] Skipping create...Using existing machine configuration
	I1226 15:31:05.694283    9557 fix.go:54] fixHost starting: 
	I1226 15:31:05.694662    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:05.748230    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:05.748279    9557 fix.go:102] recreateIfNeeded on force-systemd-flag-051000: state= err=unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:05.748304    9557 fix.go:107] machineExists: false. err=machine does not exist
	I1226 15:31:05.770021    9557 out.go:177] * docker "force-systemd-flag-051000" container is missing, will recreate.
	I1226 15:31:05.813782    9557 delete.go:124] DEMOLISHING force-systemd-flag-051000 ...
	I1226 15:31:05.813935    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:05.868437    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	W1226 15:31:05.868536    9557 stop.go:75] unable to get state: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:05.868578    9557 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:05.868986    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:05.922587    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:05.922639    9557 delete.go:82] Unable to get host status for force-systemd-flag-051000, assuming it has already been deleted: state: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:05.922736    9557 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-051000
	W1226 15:31:05.977276    9557 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-051000 returned with exit code 1
	I1226 15:31:05.977319    9557 kic.go:371] could not find the container force-systemd-flag-051000 to remove it. will try anyways
	I1226 15:31:05.977401    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:06.032121    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	W1226 15:31:06.032183    9557 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:06.032273    9557 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-051000 /bin/bash -c "sudo init 0"
	W1226 15:31:06.085582    9557 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-051000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 15:31:06.085620    9557 oci.go:650] error shutdown force-systemd-flag-051000: docker exec --privileged -t force-systemd-flag-051000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:07.086009    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:07.139260    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:07.139326    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:07.139340    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:07.139366    9557 retry.go:31] will retry after 596.98242ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:07.736624    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:07.790349    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:07.790417    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:07.790431    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:07.790455    9557 retry.go:31] will retry after 1.070605599s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:08.861385    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:08.914051    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:08.914102    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:08.914117    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:08.914141    9557 retry.go:31] will retry after 706.328056ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:09.621053    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:09.674690    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:09.674744    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:09.674758    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:09.674782    9557 retry.go:31] will retry after 1.548782327s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:11.223802    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:11.277582    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:11.277642    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:11.277654    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:11.277681    9557 retry.go:31] will retry after 3.292487824s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:14.570903    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:14.625525    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:14.625576    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:14.625587    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:14.625613    9557 retry.go:31] will retry after 5.013394751s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:19.640575    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:19.694854    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:19.694901    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:19.694912    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:19.694938    9557 retry.go:31] will retry after 5.131998564s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:24.828028    9557 cli_runner.go:164] Run: docker container inspect force-systemd-flag-051000 --format={{.State.Status}}
	W1226 15:31:24.883510    9557 cli_runner.go:211] docker container inspect force-systemd-flag-051000 --format={{.State.Status}} returned with exit code 1
	I1226 15:31:24.883563    9557 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:31:24.883574    9557 oci.go:664] temporary error: container force-systemd-flag-051000 status is  but expect it to be exited
	I1226 15:31:24.883605    9557 oci.go:88] couldn't shut down force-systemd-flag-051000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	 
	I1226 15:31:24.883685    9557 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-051000
	I1226 15:31:24.935775    9557 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-051000
	W1226 15:31:24.988753    9557 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-051000 returned with exit code 1
	I1226 15:31:24.988887    9557 cli_runner.go:164] Run: docker network inspect force-systemd-flag-051000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:31:25.043795    9557 cli_runner.go:164] Run: docker network rm force-systemd-flag-051000
	I1226 15:31:25.144725    9557 fix.go:114] Sleeping 1 second for extra luck!
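The DEMOLISHING step above tears down whatever half-created state exists, force-removing the container (with its volumes) and the per-profile network while ignoring "no such object" errors, before createHost runs again. A sketch of that cleanup as one might reproduce it by hand:

package main

import (
	"fmt"
	"os/exec"
)

func demolish(name string) {
	for _, args := range [][]string{
		{"rm", "-f", "-v", name}, // container and anonymous volumes
		{"network", "rm", name},  // per-profile bridge network
	} {
		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
			fmt.Printf("docker %v (probably ok): %v: %s\n", args, err, out)
		}
	}
}

func main() { demolish("force-systemd-flag-051000") }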
	I1226 15:31:26.144976    9557 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:31:26.168156    9557 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:31:26.168347    9557 start.go:159] libmachine.API.Create for "force-systemd-flag-051000" (driver="docker")
	I1226 15:31:26.168421    9557 client.go:168] LocalClient.Create starting
	I1226 15:31:26.168632    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:31:26.168721    9557 main.go:141] libmachine: Decoding PEM data...
	I1226 15:31:26.168746    9557 main.go:141] libmachine: Parsing certificate...
	I1226 15:31:26.168824    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:31:26.168891    9557 main.go:141] libmachine: Decoding PEM data...
	I1226 15:31:26.168908    9557 main.go:141] libmachine: Parsing certificate...
	I1226 15:31:26.169556    9557 cli_runner.go:164] Run: docker network inspect force-systemd-flag-051000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:31:26.223482    9557 cli_runner.go:211] docker network inspect force-systemd-flag-051000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:31:26.223571    9557 network_create.go:281] running [docker network inspect force-systemd-flag-051000] to gather additional debugging logs...
	I1226 15:31:26.223604    9557 cli_runner.go:164] Run: docker network inspect force-systemd-flag-051000
	W1226 15:31:26.276784    9557 cli_runner.go:211] docker network inspect force-systemd-flag-051000 returned with exit code 1
	I1226 15:31:26.276811    9557 network_create.go:284] error running [docker network inspect force-systemd-flag-051000]: docker network inspect force-systemd-flag-051000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-051000 not found
	I1226 15:31:26.276829    9557 network_create.go:286] output of [docker network inspect force-systemd-flag-051000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-051000 not found
	
	** /stderr **
	I1226 15:31:26.276987    9557 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:31:26.331275    9557 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:31:26.332683    9557 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:31:26.334215    9557 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:31:26.335601    9557 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:31:26.335945    9557 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00049da40}
	I1226 15:31:26.335960    9557 network_create.go:124] attempt to create docker network force-systemd-flag-051000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1226 15:31:26.336027    9557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-051000 force-systemd-flag-051000
	I1226 15:31:26.428656    9557 network_create.go:108] docker network force-systemd-flag-051000 192.168.85.0/24 created
	I1226 15:31:26.428723    9557 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-051000" container
	I1226 15:31:26.428845    9557 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:31:26.486690    9557 cli_runner.go:164] Run: docker volume create force-systemd-flag-051000 --label name.minikube.sigs.k8s.io=force-systemd-flag-051000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:31:26.540719    9557 oci.go:103] Successfully created a docker volume force-systemd-flag-051000
	I1226 15:31:26.540906    9557 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-051000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-051000 --entrypoint /usr/bin/test -v force-systemd-flag-051000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:31:26.847049    9557 oci.go:107] Successfully prepared a docker volume force-systemd-flag-051000
	I1226 15:31:26.847086    9557 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:31:26.847098    9557 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:31:26.847204    9557 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-051000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 15:37:26.166413    9557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:37:26.166486    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:26.218809    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:26.218947    9557 retry.go:31] will retry after 282.185784ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:26.501951    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:26.554771    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:26.554875    9557 retry.go:31] will retry after 430.6084ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:26.987470    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:27.040534    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:27.040645    9557 retry.go:31] will retry after 348.691132ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:27.390555    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:27.447525    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	W1226 15:37:27.447679    9557 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	
	W1226 15:37:27.447731    9557 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:27.447831    9557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:37:27.447915    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:27.500731    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:27.500835    9557 retry.go:31] will retry after 203.000921ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:27.704828    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:27.757798    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:27.757896    9557 retry.go:31] will retry after 356.601672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:28.115066    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:28.169607    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:28.169734    9557 retry.go:31] will retry after 791.025833ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:28.962602    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:29.016108    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	W1226 15:37:29.016213    9557 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	
	W1226 15:37:29.016232    9557 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:29.016242    9557 start.go:128] duration metric: createHost completed in 6m2.873453916s
	I1226 15:37:29.016311    9557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:37:29.016380    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:29.072505    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:29.072615    9557 retry.go:31] will retry after 193.140496ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:29.266563    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:29.320616    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:29.320734    9557 retry.go:31] will retry after 316.319249ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:29.637391    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:29.690665    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:29.690782    9557 retry.go:31] will retry after 677.443709ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:30.368617    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:30.422181    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	W1226 15:37:30.422282    9557 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	
	W1226 15:37:30.422301    9557 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:30.422357    9557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:37:30.422442    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:30.474542    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:30.474640    9557 retry.go:31] will retry after 349.059887ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:30.823983    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:30.878502    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:30.878600    9557 retry.go:31] will retry after 271.182234ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:31.150223    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:31.203257    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	I1226 15:37:31.203350    9557 retry.go:31] will retry after 458.03672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:31.661761    9557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000
	W1226 15:37:31.713091    9557 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000 returned with exit code 1
	W1226 15:37:31.713195    9557 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	
	W1226 15:37:31.713210    9557 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-051000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-051000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	I1226 15:37:31.713227    9557 fix.go:56] fixHost completed within 6m26.021295674s
	I1226 15:37:31.713236    9557 start.go:83] releasing machines lock for "force-systemd-flag-051000", held for 6m26.021332986s
	W1226 15:37:31.713332    9557 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-051000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-051000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1226 15:37:31.756449    9557 out.go:177] 
	W1226 15:37:31.785799    9557 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1226 15:37:31.785902    9557 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1226 15:37:31.785940    9557 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1226 15:37:31.826953    9557 out.go:177] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-051000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-051000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-051000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (203.735112ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-051000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
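What docker_test.go:110 was trying to verify once the node was up: with --force-systemd, `docker info --format {{.CgroupDriver}}` inside the node should print "systemd". Since the node never came up, the ssh call exits 80 before the check can run. A sketch of the check itself (the test runs it via `minikube ssh`; this runs against whatever daemon the local client sees):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func cgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := cgroupDriver()
	fmt.Println(driver, err) // expect "systemd" on a --force-systemd node
}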
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-12-26 15:37:32.109611 -0800 PST m=+6816.366637393
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-051000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-051000:
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-051000",
	        "Id": "0b3bcd7d47e9c6eaa981548df97e8367652e3ad2a531cf288879bb02def52b3e",
	        "Created": "2023-12-26T23:31:26.385799129Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-051000"
	        }
	    }
	]
-- /stdout --
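Only the network object survived the failed start; its inspect JSON can be decoded with a small struct. A sketch covering just the fields visible in the output above:

    // netinspect.go: decode the `docker network inspect` JSON shown above.
    // A sketch; the struct covers only the fields visible in this report.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type network struct {
    	Name   string `json:"Name"`
    	Driver string `json:"Driver"`
    	IPAM   struct {
    		Config []struct {
    			Subnet  string `json:"Subnet"`
    			Gateway string `json:"Gateway"`
    		} `json:"Config"`
    	} `json:"IPAM"`
    	Labels map[string]string `json:"Labels"`
    }

    func main() {
    	raw := `[{"Name":"force-systemd-flag-051000","Driver":"bridge",
    	  "IPAM":{"Config":[{"Subnet":"192.168.85.0/24","Gateway":"192.168.85.1"}]},
    	  "Labels":{"name.minikube.sigs.k8s.io":"force-systemd-flag-051000"}}]`
    	var nets []network
    	if err := json.Unmarshal([]byte(raw), &nets); err != nil {
    		panic(err)
    	}
    	fmt.Println(nets[0].Name, nets[0].IPAM.Config[0].Subnet) // force-systemd-flag-051000 192.168.85.0/24
    }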
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-051000 -n force-systemd-flag-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-051000 -n force-systemd-flag-051000: exit status 7 (113.155457ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1226 15:37:32.280042   10167 status.go:249] status error: host: state: unknown state "force-systemd-flag-051000": docker container inspect force-systemd-flag-051000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-051000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-051000" host is not running, skipping log retrieval (state="Nonexistent")
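"exit status 7 (may be ok)" reflects how `minikube status` encodes component state in its exit code: in the minikube source, the host, kubelet, and apiserver each set one bit when not running, so 7 (all three bits set) is consistent with a host that does not exist. A sketch that decodes a code under that scheme (flag names paraphrased from minikube's status command, not exact identifiers):

    // statuscode.go: decode a `minikube status` exit code into component flags.
    // Assumes minikube's bit-flag scheme; a sketch for reading codes like the 7 above.
    package main

    import "fmt"

    const (
    	hostNotRunning      = 1 << 0
    	kubeletNotRunning   = 1 << 1
    	apiserverNotRunning = 1 << 2
    )

    func main() {
    	code := 7
    	fmt.Printf("host down: %v, kubelet down: %v, apiserver down: %v\n",
    		code&hostNotRunning != 0,
    		code&kubeletNotRunning != 0,
    		code&apiserverNotRunning != 0)
    }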
helpers_test.go:175: Cleaning up "force-systemd-flag-051000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-051000
--- FAIL: TestForceSystemdFlag (756.86s)
TestForceSystemdEnv (756.02s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-026000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1226 15:13:46.304532    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:14:36.275148    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:16:49.357320    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:18:46.300655    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:19:36.272438    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:22:39.388764    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:23:46.356680    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:24:36.328484    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-026000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.869844285s)
-- stdout --
	* [force-systemd-env-026000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-026000 in cluster force-systemd-env-026000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-026000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1226 15:12:57.915938    9071 out.go:296] Setting OutFile to fd 1 ...
	I1226 15:12:57.916140    9071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:12:57.916147    9071 out.go:309] Setting ErrFile to fd 2...
	I1226 15:12:57.916151    9071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 15:12:57.916335    9071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 15:12:57.917960    9071 out.go:303] Setting JSON to false
	I1226 15:12:57.941415    9071 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6147,"bootTime":1703626230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 15:12:57.941505    9071 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 15:12:57.963321    9071 out.go:177] * [force-systemd-env-026000] minikube v1.32.0 on Darwin 14.2.1
	I1226 15:12:57.984058    9071 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 15:12:57.984204    9071 notify.go:220] Checking for updates...
	I1226 15:12:58.027907    9071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 15:12:58.049924    9071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 15:12:58.070999    9071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 15:12:58.091949    9071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 15:12:58.112972    9071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1226 15:12:58.134755    9071 config.go:182] Loaded profile config "offline-docker-595000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 15:12:58.134911    9071 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 15:12:58.196725    9071 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 15:12:58.196868    9071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:12:58.303679    9071 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-12-26 23:12:58.293412123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
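minikube drives this check through `docker system info --format "{{json .}}"` and reads a handful of fields out of the returned JSON. A sketch with a partial struct limited to fields visible in the dump above:

    // dockerinfo.go: read a few fields from `docker system info --format "{{json .}}"`.
    // A sketch; the struct deliberately covers only fields shown in the dump above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type dockerInfo struct {
    	ServerVersion   string `json:"ServerVersion"`
    	CgroupDriver    string `json:"CgroupDriver"`
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    	OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		fmt.Println("docker system info failed:", err)
    		return
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		fmt.Println("unexpected info payload:", err)
    		return
    	}
    	fmt.Printf("%s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
    		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
    }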
	I1226 15:12:58.347163    9071 out.go:177] * Using the docker driver based on user configuration
	I1226 15:12:58.368076    9071 start.go:298] selected driver: docker
	I1226 15:12:58.368099    9071 start.go:902] validating driver "docker" against <nil>
	I1226 15:12:58.368113    9071 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 15:12:58.372551    9071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 15:12:58.481300    9071 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-12-26 23:12:58.471012831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 15:12:58.481479    9071 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 15:12:58.481682    9071 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1226 15:12:58.503339    9071 out.go:177] * Using Docker Desktop driver with root privileges
	I1226 15:12:58.524103    9071 cni.go:84] Creating CNI manager for ""
	I1226 15:12:58.524138    9071 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 15:12:58.524157    9071 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1226 15:12:58.524170    9071 start_flags.go:323] config:
	{Name:force-systemd-env-026000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-026000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 15:12:58.546041    9071 out.go:177] * Starting control plane node force-systemd-env-026000 in cluster force-systemd-env-026000
	I1226 15:12:58.588124    9071 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 15:12:58.609978    9071 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 15:12:58.631195    9071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:12:58.631248    9071 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 15:12:58.631270    9071 cache.go:56] Caching tarball of preloaded images
	I1226 15:12:58.631268    9071 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 15:12:58.631470    9071 preload.go:174] Found /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 15:12:58.631488    9071 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 15:12:58.631655    9071 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/force-systemd-env-026000/config.json ...
	I1226 15:12:58.632473    9071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/force-systemd-env-026000/config.json: {Name:mk315bfd4747657103f1cbe9ccae617424a83a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 15:12:58.687598    9071 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 15:12:58.687618    9071 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 15:12:58.687838    9071 cache.go:194] Successfully downloaded all kic artifacts
	I1226 15:12:58.687886    9071 start.go:365] acquiring machines lock for force-systemd-env-026000: {Name:mk2d001d23859e05fc2e941b9831d3ba77d3ca0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:12:58.688044    9071 start.go:369] acquired machines lock for "force-systemd-env-026000" in 143.087µs
	I1226 15:12:58.688072    9071 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-026000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-026000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 15:12:58.688124    9071 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:12:58.709874    9071 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:12:58.710231    9071 start.go:159] libmachine.API.Create for "force-systemd-env-026000" (driver="docker")
	I1226 15:12:58.710283    9071 client.go:168] LocalClient.Create starting
	I1226 15:12:58.710458    9071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:12:58.710544    9071 main.go:141] libmachine: Decoding PEM data...
	I1226 15:12:58.710573    9071 main.go:141] libmachine: Parsing certificate...
	I1226 15:12:58.710681    9071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:12:58.710750    9071 main.go:141] libmachine: Decoding PEM data...
	I1226 15:12:58.710777    9071 main.go:141] libmachine: Parsing certificate...
	I1226 15:12:58.711623    9071 cli_runner.go:164] Run: docker network inspect force-systemd-env-026000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:12:58.766297    9071 cli_runner.go:211] docker network inspect force-systemd-env-026000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:12:58.766420    9071 network_create.go:281] running [docker network inspect force-systemd-env-026000] to gather additional debugging logs...
	I1226 15:12:58.766434    9071 cli_runner.go:164] Run: docker network inspect force-systemd-env-026000
	W1226 15:12:58.819228    9071 cli_runner.go:211] docker network inspect force-systemd-env-026000 returned with exit code 1
	I1226 15:12:58.819261    9071 network_create.go:284] error running [docker network inspect force-systemd-env-026000]: docker network inspect force-systemd-env-026000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-026000 not found
	I1226 15:12:58.819278    9071 network_create.go:286] output of [docker network inspect force-systemd-env-026000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-026000 not found
	
	** /stderr **
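The long --format arguments above are ordinary Go templates evaluated by the docker CLI. The subnet/gateway extraction can be reproduced with text/template directly; a self-contained sketch reusing the same {{range .IPAM.Config}} expression against a local struct:

    // nettemplate.go: render the subnet/gateway template minikube passes to
    // `docker network inspect --format ...`, against a local struct. A sketch.
    package main

    import (
    	"os"
    	"text/template"
    )

    type ipamConfig struct{ Subnet, Gateway string }

    type network struct {
    	Name string
    	IPAM struct{ Config []ipamConfig }
    }

    func main() {
    	n := network{Name: "force-systemd-env-026000"}
    	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.76.0/24", Gateway: "192.168.76.1"}}

    	const format = `{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}` + "\n"
    	t := template.Must(template.New("net").Parse(format))
    	if err := t.Execute(os.Stdout, n); err != nil { // force-systemd-env-026000: 192.168.76.0/24 via 192.168.76.1
    		panic(err)
    	}
    }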
	I1226 15:12:58.819421    9071 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:12:58.872926    9071 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:12:58.874408    9071 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:12:58.876111    9071 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:12:58.876486    9071 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021eb840}
	I1226 15:12:58.876505    9071 network_create.go:124] attempt to create docker network force-systemd-env-026000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1226 15:12:58.876566    9071 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-026000 force-systemd-env-026000
	I1226 15:12:58.967040    9071 network_create.go:108] docker network force-systemd-env-026000 192.168.76.0/24 created
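The subnet scan above tries 192.168.49.0/24 first, then steps the third octet by 9 (58, 67, 76, ...) until it finds a network that is not already reserved. A simplified sketch of that selection, seeded with the reservations seen in this run (the real code also checks host interfaces, not just known docker networks):

    // subnetscan.go: pick the first free 192.168.x.0/24, stepping the third
    // octet by 9 as the log above suggests minikube does. Simplified sketch.
    package main

    import "fmt"

    func main() {
    	reserved := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if reserved[subnet] {
    			fmt.Println("skipping reserved subnet", subnet)
    			continue
    		}
    		fmt.Println("using free private subnet", subnet) // 192.168.76.0/24 on this run
    		return
    	}
    }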
	I1226 15:12:58.967081    9071 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-026000" container
	I1226 15:12:58.967193    9071 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:12:59.022110    9071 cli_runner.go:164] Run: docker volume create force-systemd-env-026000 --label name.minikube.sigs.k8s.io=force-systemd-env-026000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:12:59.078355    9071 oci.go:103] Successfully created a docker volume force-systemd-env-026000
	I1226 15:12:59.078519    9071 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-026000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-026000 --entrypoint /usr/bin/test -v force-systemd-env-026000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:12:59.482947    9071 oci.go:107] Successfully prepared a docker volume force-systemd-env-026000
	I1226 15:12:59.483008    9071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:12:59.483023    9071 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:12:59.483136    9071 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-026000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
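This step extracts the preloaded image tarball into the profile's volume by running tar inside a throwaway kicbase container, with the tarball mounted read-only. A sketch of issuing the same docker run from Go (the path, volume, and image below are stand-ins for this run's values):

    // preload.go: run the preload-extraction sidecar shown in the log, from Go.
    // A sketch; tarballPath, volume, and image are placeholders.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tarballPath := "/path/to/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
    	volume := "force-systemd-env-026000"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857"

    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarballPath+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    	}
    }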
	I1226 15:18:58.709403    9071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:18:58.709531    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:18:58.763312    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:18:58.763453    9071 retry.go:31] will retry after 173.346203ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:18:58.937765    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:18:58.991414    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:18:58.991514    9071 retry.go:31] will retry after 531.966838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:18:59.523979    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:18:59.578115    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:18:59.578211    9071 retry.go:31] will retry after 842.195152ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:00.420815    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:19:00.475042    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	W1226 15:19:00.475217    9071 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	
	W1226 15:19:00.475244    9071 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:00.475340    9071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:19:00.475407    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:19:00.528439    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:19:00.528561    9071 retry.go:31] will retry after 333.851941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:00.862701    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:19:00.916037    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:19:00.916161    9071 retry.go:31] will retry after 442.125064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:01.359465    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:19:01.413655    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:19:01.413754    9071 retry.go:31] will retry after 725.854118ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:02.140059    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:19:02.193762    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	W1226 15:19:02.193861    9071 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	
	W1226 15:19:02.193881    9071 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:02.193896    9071 start.go:128] duration metric: createHost completed in 6m3.50895453s
	I1226 15:19:02.193904    9071 start.go:83] releasing machines lock for "force-systemd-env-026000", held for 6m3.509062167s
	W1226 15:19:02.193916    9071 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1226 15:19:02.194367    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:02.246934    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:02.246993    9071 delete.go:82] Unable to get host status for force-systemd-env-026000, assuming it has already been deleted: state: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	W1226 15:19:02.247098    9071 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1226 15:19:02.247107    9071 start.go:709] Will try again in 5 seconds ...
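The repeated "retry.go:31] will retry after ..." lines above come from a generic retry helper that re-runs a probe with growing delays until it succeeds or the budget is spent. A simplified stand-in (the delay schedule here is illustrative, not minikube's exact backoff):

    // retry.go-style helper: re-run fn with growing delays, as the log's
    // "will retry after ..." lines show. Simplified sketch, not minikube's code.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base * time.Duration(1<<i) // 200ms, 400ms, 800ms, ...
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	err := retry(4, 200*time.Millisecond, func() error {
    		return errors.New("No such container: force-systemd-env-026000")
    	})
    	fmt.Println("gave up:", err)
    }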
	I1226 15:19:07.248092    9071 start.go:365] acquiring machines lock for force-systemd-env-026000: {Name:mk2d001d23859e05fc2e941b9831d3ba77d3ca0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 15:19:07.248338    9071 start.go:369] acquired machines lock for "force-systemd-env-026000" in 180.348µs
	I1226 15:19:07.248373    9071 start.go:96] Skipping create...Using existing machine configuration
	I1226 15:19:07.248388    9071 fix.go:54] fixHost starting: 
	I1226 15:19:07.248860    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:07.303263    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:07.303306    9071 fix.go:102] recreateIfNeeded on force-systemd-env-026000: state= err=unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:07.303326    9071 fix.go:107] machineExists: false. err=machine does not exist
	I1226 15:19:07.324898    9071 out.go:177] * docker "force-systemd-env-026000" container is missing, will recreate.
	I1226 15:19:07.345712    9071 delete.go:124] DEMOLISHING force-systemd-env-026000 ...
	I1226 15:19:07.345873    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:07.401268    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	W1226 15:19:07.401326    9071 stop.go:75] unable to get state: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:07.401349    9071 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:07.401763    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:07.454183    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:07.454265    9071 delete.go:82] Unable to get host status for force-systemd-env-026000, assuming it has already been deleted: state: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:07.454384    9071 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-026000
	W1226 15:19:07.506723    9071 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-026000 returned with exit code 1
	I1226 15:19:07.506761    9071 kic.go:371] could not find the container force-systemd-env-026000 to remove it. will try anyways
	I1226 15:19:07.506837    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:07.559478    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	W1226 15:19:07.559523    9071 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:07.559628    9071 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-026000 /bin/bash -c "sudo init 0"
	W1226 15:19:07.611323    9071 cli_runner.go:211] docker exec --privileged -t force-systemd-env-026000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 15:19:07.611353    9071 oci.go:650] error shutdown force-systemd-env-026000: docker exec --privileged -t force-systemd-env-026000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:08.611583    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:08.665984    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:08.666036    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:08.666052    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:08.666074    9071 retry.go:31] will retry after 458.291627ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:09.125502    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:09.178297    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:09.178355    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:09.178368    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:09.178396    9071 retry.go:31] will retry after 798.626296ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:09.977413    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:10.031811    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:10.031855    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:10.031869    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:10.031894    9071 retry.go:31] will retry after 1.357220984s: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:11.390488    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:11.444804    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:11.444858    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:11.444870    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:11.444897    9071 retry.go:31] will retry after 2.101772491s: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:13.546997    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:13.601092    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:13.601150    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:13.601160    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:13.601188    9071 retry.go:31] will retry after 3.41160334s: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:17.013861    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:17.066048    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:17.066115    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:17.066127    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:17.066149    9071 retry.go:31] will retry after 3.104911371s: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:20.172886    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:20.227796    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:20.227845    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:20.227859    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:20.227886    9071 retry.go:31] will retry after 5.461056159s: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:25.689304    9071 cli_runner.go:164] Run: docker container inspect force-systemd-env-026000 --format={{.State.Status}}
	W1226 15:19:25.742649    9071 cli_runner.go:211] docker container inspect force-systemd-env-026000 --format={{.State.Status}} returned with exit code 1
	I1226 15:19:25.742714    9071 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:19:25.742724    9071 oci.go:664] temporary error: container force-systemd-env-026000 status is  but expect it to be exited
	I1226 15:19:25.742761    9071 oci.go:88] couldn't shut down force-systemd-env-026000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	 
	I1226 15:19:25.742851    9071 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-026000
	I1226 15:19:25.796351    9071 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-026000
	W1226 15:19:25.847936    9071 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-026000 returned with exit code 1
	I1226 15:19:25.848046    9071 cli_runner.go:164] Run: docker network inspect force-systemd-env-026000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:19:25.902625    9071 cli_runner.go:164] Run: docker network rm force-systemd-env-026000
	I1226 15:19:26.008186    9071 fix.go:114] Sleeping 1 second for extra luck!
	I1226 15:19:27.009298    9071 start.go:125] createHost starting for "" (driver="docker")
	I1226 15:19:27.032475    9071 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1226 15:19:27.032627    9071 start.go:159] libmachine.API.Create for "force-systemd-env-026000" (driver="docker")
	I1226 15:19:27.032668    9071 client.go:168] LocalClient.Create starting
	I1226 15:19:27.032904    9071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 15:19:27.032996    9071 main.go:141] libmachine: Decoding PEM data...
	I1226 15:19:27.033022    9071 main.go:141] libmachine: Parsing certificate...
	I1226 15:19:27.033102    9071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 15:19:27.033169    9071 main.go:141] libmachine: Decoding PEM data...
	I1226 15:19:27.033190    9071 main.go:141] libmachine: Parsing certificate...
	I1226 15:19:27.054315    9071 cli_runner.go:164] Run: docker network inspect force-systemd-env-026000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 15:19:27.108768    9071 cli_runner.go:211] docker network inspect force-systemd-env-026000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 15:19:27.108862    9071 network_create.go:281] running [docker network inspect force-systemd-env-026000] to gather additional debugging logs...
	I1226 15:19:27.108879    9071 cli_runner.go:164] Run: docker network inspect force-systemd-env-026000
	W1226 15:19:27.161884    9071 cli_runner.go:211] docker network inspect force-systemd-env-026000 returned with exit code 1
	I1226 15:19:27.161916    9071 network_create.go:284] error running [docker network inspect force-systemd-env-026000]: docker network inspect force-systemd-env-026000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-026000 not found
	I1226 15:19:27.161927    9071 network_create.go:286] output of [docker network inspect force-systemd-env-026000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-026000 not found
	
	** /stderr **
	I1226 15:19:27.162062    9071 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 15:19:27.216734    9071 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:19:27.218277    9071 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:19:27.219779    9071 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:19:27.221137    9071 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:19:27.222640    9071 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 15:19:27.223083    9071 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021eb9c0}
	I1226 15:19:27.223096    9071 network_create.go:124] attempt to create docker network force-systemd-env-026000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1226 15:19:27.223170    9071 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-026000 force-systemd-env-026000
	I1226 15:19:27.313571    9071 network_create.go:108] docker network force-systemd-env-026000 192.168.94.0/24 created
	I1226 15:19:27.313701    9071 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-env-026000" container
	I1226 15:19:27.313830    9071 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 15:19:27.369036    9071 cli_runner.go:164] Run: docker volume create force-systemd-env-026000 --label name.minikube.sigs.k8s.io=force-systemd-env-026000 --label created_by.minikube.sigs.k8s.io=true
	I1226 15:19:27.421821    9071 oci.go:103] Successfully created a docker volume force-systemd-env-026000
	I1226 15:19:27.421946    9071 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-026000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-026000 --entrypoint /usr/bin/test -v force-systemd-env-026000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 15:19:27.757479    9071 oci.go:107] Successfully prepared a docker volume force-systemd-env-026000
	I1226 15:19:27.757520    9071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 15:19:27.757546    9071 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 15:19:27.757634    9071 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-026000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 15:25:27.089153    9071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:25:27.089276    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:27.142482    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:27.142602    9071 retry.go:31] will retry after 292.195365ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:27.435566    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:27.489425    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:27.489548    9071 retry.go:31] will retry after 208.569238ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:27.698590    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:27.753150    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:27.753248    9071 retry.go:31] will retry after 693.940831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:28.449490    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:28.502374    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	W1226 15:25:28.502506    9071 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	
	W1226 15:25:28.502533    9071 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:28.502592    9071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:25:28.502655    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:28.555955    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:28.556066    9071 retry.go:31] will retry after 219.582156ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:28.775932    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:28.830392    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:28.830507    9071 retry.go:31] will retry after 361.75715ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:29.194609    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:29.249096    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:29.249191    9071 retry.go:31] will retry after 677.0475ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:29.926639    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:29.981244    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	W1226 15:25:29.981349    9071 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	
	W1226 15:25:29.981363    9071 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:29.981383    9071 start.go:128] duration metric: createHost completed in 6m2.916270326s
	I1226 15:25:29.981451    9071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 15:25:29.981520    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:30.034060    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:30.034165    9071 retry.go:31] will retry after 196.66046ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:30.231354    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:30.286154    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:30.286262    9071 retry.go:31] will retry after 472.990627ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:30.760223    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:30.814248    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:30.814346    9071 retry.go:31] will retry after 416.975455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:31.231751    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:31.285579    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	W1226 15:25:31.285685    9071 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	
	W1226 15:25:31.285702    9071 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:31.285764    9071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 15:25:31.285831    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:31.338721    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:31.338814    9071 retry.go:31] will retry after 308.991264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:31.648095    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:31.700752    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:31.700844    9071 retry.go:31] will retry after 346.023777ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:32.047309    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:32.102100    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	I1226 15:25:32.102188    9071 retry.go:31] will retry after 476.102644ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:32.580573    9071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000
	W1226 15:25:32.634483    9071 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000 returned with exit code 1
	W1226 15:25:32.634588    9071 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	
	W1226 15:25:32.634605    9071 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-026000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-026000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	I1226 15:25:32.634620    9071 fix.go:56] fixHost completed within 6m25.330664134s
	I1226 15:25:32.634627    9071 start.go:83] releasing machines lock for "force-systemd-env-026000", held for 6m25.330706292s
	W1226 15:25:32.634741    9071 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-026000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-026000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1226 15:25:32.677040    9071 out.go:177] 
	W1226 15:25:32.699160    9071 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1226 15:25:32.699198    9071 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1226 15:25:32.699232    9071 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1226 15:25:32.721058    9071 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-026000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
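Note: the repeated retry.go:31 entries above are a capped, jittered backoff around docker container inspect; every attempt fails identically because the container was never created, so retrying cannot succeed. A minimal Go sketch of that pattern (illustrative only; the helper name, attempt count, and delay range are assumptions, not minikube's actual implementation):

	package main
	
	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)
	
	// sshHostPort mimics the failing call: ask Docker which host port is
	// published for 22/tcp on the named container.
	func sshHostPort(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", name, err)
		}
		return string(out), nil
	}
	
	func main() {
		for attempt := 0; attempt < 4; attempt++ {
			port, err := sshHostPort("force-systemd-env-026000")
			if err == nil {
				fmt.Println("ssh port:", port)
				return
			}
			// Jittered 200-700ms waits, comparable to the delays logged above.
			delay := time.Duration(200+rand.Intn(500)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		fmt.Println("giving up: the container does not exist, so inspect can only fail")
	}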
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-026000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-026000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (208.708728ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-026000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
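For reference, the probe that exits 80 above reduces to a single Docker query run inside the node over SSH. A minimal Go sketch of the same query against any reachable daemon (the test issues it via minikube ssh, and the force-systemd tests pass only when the answer is "systemd"):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same query the test runs inside the node: docker info --format {{.CgroupDriver}}
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err) // here: the node never came up
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // expected: systemd
	}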
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-12-26 15:25:33.004215 -0800 PST m=+6097.256862834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-026000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-026000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-026000",
	        "Id": "3f40eb1ef446e1a3c7aebd5fbda708523951527651af10a3274c4bb07cc7693b",
	        "Created": "2023-12-26T23:19:27.272282374Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-026000"
	        }
	    }
	]

-- /stdout --
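The inspect output above shows what the failed run left behind: the minikube-labeled bridge network exists but holds no containers. A hedged sketch of sweeping such orphaned networks by label, mirroring the docker network rm call earlier in the log (sketch only; the harness's own cleanup is the minikube delete below):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// List networks carrying the label visible in the inspect output above.
		out, err := exec.Command("docker", "network", "ls",
			"--filter", "label=created_by.minikube.sigs.k8s.io=true",
			"--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("network ls failed:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			// Mirrors the "docker network rm force-systemd-env-026000" call in the log.
			if err := exec.Command("docker", "network", "rm", name).Run(); err != nil {
				fmt.Printf("could not remove %s: %v\n", name, err)
			}
		}
	}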
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-026000 -n force-systemd-env-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-026000 -n force-systemd-env-026000: exit status 7 (118.371037ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1226 15:25:33.177482    9655 status.go:249] status error: host: state: unknown state "force-systemd-env-026000": docker container inspect force-systemd-env-026000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-026000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-026000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-026000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-026000
--- FAIL: TestForceSystemdEnv (756.02s)
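The 756s wall time is roughly two back-to-back createHost budgets: each attempt aborts with DRV_CREATE_TIMEOUT once it exceeds 360 seconds (the log's "createHost completed in 6m2.9s"). A minimal sketch of guarding a slow provisioning step with such a budget (the busybox stand-in is an assumption, not the real workload, which here was the preload extraction):

	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		// Same 360-second budget the DRV_CREATE_TIMEOUT message reports.
		ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
		defer cancel()
	
		// Stand-in for the long-running create step.
		cmd := exec.CommandContext(ctx, "docker", "run", "--rm", "busybox", "true")
		if err := cmd.Run(); err != nil {
			if ctx.Err() == context.DeadlineExceeded {
				fmt.Println("create host timed out in 360 seconds")
				return
			}
			fmt.Println("create failed:", err)
			return
		}
		fmt.Println("created within budget")
	}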

TestIngressAddonLegacy/StartLegacyK8sCluster (263.27s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-212000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1226 13:58:46.226870    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:59:13.919929    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:59:36.198950    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:36.205431    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:36.217637    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:36.238277    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:36.279278    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:36.359724    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:36.521290    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:36.843533    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:37.484791    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:38.765394    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:41.326848    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:46.448376    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 13:59:56.688515    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:00:17.170521    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:00:58.131865    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
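Note: the cert_rotation.go:168 lines above appear to come from a client-go watcher re-reading client certificates for profiles (addons-914000, functional-155000) that earlier tests already deleted; they are noise unrelated to this test's result. A tiny sketch of the check that fails (path copied from the log):

	package main
	
	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)
	
	func main() {
		crt := "/Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt"
		if _, err := os.Stat(crt); errors.Is(err, fs.ErrNotExist) {
			fmt.Println("key failed with : open " + crt + ": no such file or directory")
			return
		}
		fmt.Println("client.crt present")
	}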
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-212000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m23.228260093s)

-- stdout --
	* [ingress-addon-legacy-212000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-212000 in cluster ingress-addon-legacy-212000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1226 13:57:02.907704    4606 out.go:296] Setting OutFile to fd 1 ...
	I1226 13:57:02.907898    4606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:57:02.907902    4606 out.go:309] Setting ErrFile to fd 2...
	I1226 13:57:02.907906    4606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:57:02.908124    4606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 13:57:02.909547    4606 out.go:303] Setting JSON to false
	I1226 13:57:02.931657    4606 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1592,"bootTime":1703626230,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 13:57:02.931823    4606 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 13:57:02.954610    4606 out.go:177] * [ingress-addon-legacy-212000] minikube v1.32.0 on Darwin 14.2.1
	I1226 13:57:03.019391    4606 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 13:57:02.997624    4606 notify.go:220] Checking for updates...
	I1226 13:57:03.063466    4606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 13:57:03.106433    4606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 13:57:03.163931    4606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 13:57:03.206285    4606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 13:57:03.228194    4606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 13:57:03.249678    4606 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 13:57:03.306166    4606 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 13:57:03.306317    4606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:57:03.406086    4606 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:61 SystemTime:2023-12-26 21:57:03.396809615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:57:03.448241    4606 out.go:177] * Using the docker driver based on user configuration
	I1226 13:57:03.469396    4606 start.go:298] selected driver: docker
	I1226 13:57:03.469428    4606 start.go:902] validating driver "docker" against <nil>
	I1226 13:57:03.469442    4606 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 13:57:03.473861    4606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:57:03.573646    4606 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:61 SystemTime:2023-12-26 21:57:03.563742517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:57:03.573804    4606 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 13:57:03.573984    4606 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 13:57:03.594859    4606 out.go:177] * Using Docker Desktop driver with root privileges
	I1226 13:57:03.616778    4606 cni.go:84] Creating CNI manager for ""
	I1226 13:57:03.616822    4606 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1226 13:57:03.616854    4606 start_flags.go:323] config:
	{Name:ingress-addon-legacy-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:57:03.660912    4606 out.go:177] * Starting control plane node ingress-addon-legacy-212000 in cluster ingress-addon-legacy-212000
	I1226 13:57:03.682556    4606 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 13:57:03.725846    4606 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 13:57:03.746791    4606 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1226 13:57:03.746901    4606 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 13:57:03.800558    4606 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 13:57:03.800586    4606 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 13:57:03.801007    4606 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1226 13:57:03.801020    4606 cache.go:56] Caching tarball of preloaded images
	I1226 13:57:03.801210    4606 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1226 13:57:03.822851    4606 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1226 13:57:03.864752    4606 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:57:03.948053    4606 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1226 13:57:10.304141    4606 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:57:10.304359    4606 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:57:10.932324    4606 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1226 13:57:10.932569    4606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/config.json ...
	I1226 13:57:10.932594    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/config.json: {Name:mk22d943146ac289402f868f4e5c16404c688e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:10.932891    4606 cache.go:194] Successfully downloaded all kic artifacts
	I1226 13:57:10.932920    4606 start.go:365] acquiring machines lock for ingress-addon-legacy-212000: {Name:mk28134f41ff1fe832a9a6056c4f616b43d6267d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 13:57:10.933051    4606 start.go:369] acquired machines lock for "ingress-addon-legacy-212000" in 121.711µs
	I1226 13:57:10.933071    4606 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 13:57:10.933117    4606 start.go:125] createHost starting for "" (driver="docker")
	I1226 13:57:10.955521    4606 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1226 13:57:10.955833    4606 start.go:159] libmachine.API.Create for "ingress-addon-legacy-212000" (driver="docker")
	I1226 13:57:10.955884    4606 client.go:168] LocalClient.Create starting
	I1226 13:57:10.956057    4606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 13:57:10.956147    4606 main.go:141] libmachine: Decoding PEM data...
	I1226 13:57:10.956180    4606 main.go:141] libmachine: Parsing certificate...
	I1226 13:57:10.956271    4606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 13:57:10.956342    4606 main.go:141] libmachine: Decoding PEM data...
	I1226 13:57:10.956359    4606 main.go:141] libmachine: Parsing certificate...
	I1226 13:57:10.965943    4606 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-212000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 13:57:11.019953    4606 cli_runner.go:211] docker network inspect ingress-addon-legacy-212000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 13:57:11.020082    4606 network_create.go:281] running [docker network inspect ingress-addon-legacy-212000] to gather additional debugging logs...
	I1226 13:57:11.020105    4606 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-212000
	W1226 13:57:11.070664    4606 cli_runner.go:211] docker network inspect ingress-addon-legacy-212000 returned with exit code 1
	I1226 13:57:11.070705    4606 network_create.go:284] error running [docker network inspect ingress-addon-legacy-212000]: docker network inspect ingress-addon-legacy-212000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-212000 not found
	I1226 13:57:11.070723    4606 network_create.go:286] output of [docker network inspect ingress-addon-legacy-212000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-212000 not found
	
	** /stderr **
	I1226 13:57:11.070880    4606 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 13:57:11.122701    4606 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005f7640}
	I1226 13:57:11.122737    4606 network_create.go:124] attempt to create docker network ingress-addon-legacy-212000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1226 13:57:11.122822    4606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-212000 ingress-addon-legacy-212000
	I1226 13:57:11.207824    4606 network_create.go:108] docker network ingress-addon-legacy-212000 192.168.49.0/24 created
	I1226 13:57:11.207874    4606 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-212000" container
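
	The two cli_runner steps above create a dedicated bridge network for the profile and then derive a static node IP from its subnet. A minimal sketch for confirming the subnet and gateway with the stock docker CLI (network name taken from the log; the --format template is standard Go templating):

	  docker network inspect ingress-addon-legacy-212000 \
	    --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
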
	I1226 13:57:11.207986    4606 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 13:57:11.258210    4606 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-212000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-212000 --label created_by.minikube.sigs.k8s.io=true
	I1226 13:57:11.309451    4606 oci.go:103] Successfully created a docker volume ingress-addon-legacy-212000
	I1226 13:57:11.309564    4606 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-212000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-212000 --entrypoint /usr/bin/test -v ingress-addon-legacy-212000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 13:57:11.699734    4606 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-212000
	I1226 13:57:11.699773    4606 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1226 13:57:11.699787    4606 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 13:57:11.699918    4606 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-212000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 13:57:14.078568    4606 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-212000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (2.378587163s)
	I1226 13:57:14.078594    4606 kic.go:203] duration metric: took 2.378821 seconds to extract preloaded images to volume
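
	The preload step runs a throwaway container whose entrypoint is tar: the host tarball is bind-mounted read-only and unpacked straight into the named volume that later backs /var in the node container. A minimal sketch of the same pattern, assuming a local lz4 tarball and a scratch volume (demo-vol and the tarball path are hypothetical):

	  docker volume create demo-vol
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	    -v demo-vol:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857 \
	    -I lz4 -xf /preloaded.tar -C /extractDir
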
	I1226 13:57:14.078720    4606 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 13:57:14.178688    4606 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-212000 --name ingress-addon-legacy-212000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-212000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-212000 --network ingress-addon-legacy-212000 --ip 192.168.49.2 --volume ingress-addon-legacy-212000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 13:57:14.452173    4606 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Running}}
	I1226 13:57:14.507771    4606 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Status}}
	I1226 13:57:14.564282    4606 cli_runner.go:164] Run: docker exec ingress-addon-legacy-212000 stat /var/lib/dpkg/alternatives/iptables
	I1226 13:57:14.676947    4606 oci.go:144] the created container "ingress-addon-legacy-212000" has a running status.
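
	Most of the long docker run above is boilerplate for running systemd inside a container. A condensed sketch of the flags worth noting (image, network and IP as in the log; the comments are annotations, not part of the original command):

	  # --privileged plus unconfined seccomp/apparmor: systemd needs broad access
	  # --tmpfs /tmp --tmpfs /run: fresh tmpfs mounts that systemd expects
	  # --publish=127.0.0.1::22 etc.: bind a random host port, loopback only;
	  #   minikube reads the assigned port back later via 'docker container inspect'
	  docker run -d -t --privileged \
	    --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	    --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	    --network ingress-addon-legacy-212000 --ip 192.168.49.2 \
	    --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857
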
	I1226 13:57:14.676996    4606 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa...
	I1226 13:57:14.957030    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 13:57:14.957091    4606 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 13:57:15.019011    4606 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Status}}
	I1226 13:57:15.073571    4606 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 13:57:15.073592    4606 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-212000 chown docker:docker /home/docker/.ssh/authorized_keys]
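
	kic_runner provisions SSH by generating a keypair on the host and pushing the public half into the container. The equivalent manual steps, assuming a running kic container named mycontainer (hypothetical) and that /home/docker/.ssh already exists, as it does in the kicbase image used here:

	  docker cp ~/.ssh/id_rsa.pub mycontainer:/home/docker/.ssh/authorized_keys
	  docker exec --privileged mycontainer chown docker:docker /home/docker/.ssh/authorized_keys
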
	I1226 13:57:15.170390    4606 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Status}}
	I1226 13:57:15.222437    4606 machine.go:88] provisioning docker machine ...
	I1226 13:57:15.222480    4606 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-212000"
	I1226 13:57:15.222586    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:15.273696    4606 main.go:141] libmachine: Using SSH client type: native
	I1226 13:57:15.274042    4606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 50500 <nil> <nil>}
	I1226 13:57:15.274056    4606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-212000 && echo "ingress-addon-legacy-212000" | sudo tee /etc/hostname
	I1226 13:57:15.402819    4606 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-212000
	
	I1226 13:57:15.402913    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:15.453898    4606 main.go:141] libmachine: Using SSH client type: native
	I1226 13:57:15.454199    4606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 50500 <nil> <nil>}
	I1226 13:57:15.454216    4606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-212000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-212000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-212000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 13:57:15.573025    4606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 13:57:15.573049    4606 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17857-1142/.minikube CaCertPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17857-1142/.minikube}
	I1226 13:57:15.573076    4606 ubuntu.go:177] setting up certificates
	I1226 13:57:15.573084    4606 provision.go:83] configureAuth start
	I1226 13:57:15.573159    4606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-212000
	I1226 13:57:15.669462    4606 provision.go:138] copyHostCerts
	I1226 13:57:15.669508    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.pem
	I1226 13:57:15.669553    4606 exec_runner.go:144] found /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.pem, removing ...
	I1226 13:57:15.669561    4606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.pem
	I1226 13:57:15.669723    4606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.pem (1078 bytes)
	I1226 13:57:15.669903    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17857-1142/.minikube/cert.pem
	I1226 13:57:15.669937    4606 exec_runner.go:144] found /Users/jenkins/minikube-integration/17857-1142/.minikube/cert.pem, removing ...
	I1226 13:57:15.669942    4606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17857-1142/.minikube/cert.pem
	I1226 13:57:15.670015    4606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17857-1142/.minikube/cert.pem (1123 bytes)
	I1226 13:57:15.670152    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17857-1142/.minikube/key.pem
	I1226 13:57:15.670191    4606 exec_runner.go:144] found /Users/jenkins/minikube-integration/17857-1142/.minikube/key.pem, removing ...
	I1226 13:57:15.670196    4606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17857-1142/.minikube/key.pem
	I1226 13:57:15.670266    4606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17857-1142/.minikube/key.pem (1679 bytes)
	I1226 13:57:15.670402    4606 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-212000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-212000]
	I1226 13:57:16.033021    4606 provision.go:172] copyRemoteCerts
	I1226 13:57:16.033081    4606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 13:57:16.033136    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:16.083954    4606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 13:57:16.171580    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 13:57:16.171669    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1226 13:57:16.191668    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 13:57:16.191744    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1226 13:57:16.212021    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 13:57:16.212091    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 13:57:16.232220    4606 provision.go:86] duration metric: configureAuth took 659.123403ms
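
	All of the ssh_runner/scp traffic in this phase goes through the loopback port that Docker mapped to the container's port 22 (50500 here, per the sshutil lines above). The same session can be opened by hand with the generated key:

	  ssh -p 50500 \
	    -i /Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa \
	    docker@127.0.0.1
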
	I1226 13:57:16.232238    4606 ubuntu.go:193] setting minikube options for container-runtime
	I1226 13:57:16.232392    4606 config.go:182] Loaded profile config "ingress-addon-legacy-212000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1226 13:57:16.232464    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:16.282772    4606 main.go:141] libmachine: Using SSH client type: native
	I1226 13:57:16.283067    4606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 50500 <nil> <nil>}
	I1226 13:57:16.283078    4606 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 13:57:16.402405    4606 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1226 13:57:16.402422    4606 ubuntu.go:71] root file system type: overlay
	I1226 13:57:16.402536    4606 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 13:57:16.402631    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:16.453811    4606 main.go:141] libmachine: Using SSH client type: native
	I1226 13:57:16.454107    4606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 50500 <nil> <nil>}
	I1226 13:57:16.454159    4606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 13:57:16.582392    4606 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 13:57:16.582486    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:16.633249    4606 main.go:141] libmachine: Using SSH client type: native
	I1226 13:57:16.633565    4606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 127.0.0.1 50500 <nil> <nil>}
	I1226 13:57:16.633578    4606 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 13:57:17.192004    4606 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-26 21:57:16.580055962 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1226 13:57:17.192029    4606 machine.go:91] provisioned docker machine in 1.969581285s
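
	The empty ExecStart= followed by a populated one is the standard systemd idiom for overriding a unit that already defines ExecStart, exactly as the comments in the generated unit explain. A minimal sketch of the same idiom as a drop-in, using the log's own printf-pipe-tee style (the path and dockerd flags below are illustrative, not minikube's):

	  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	    | sudo tee /etc/systemd/system/docker.service.d/override.conf
	  sudo systemctl daemon-reload && sudo systemctl restart docker
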
	I1226 13:57:17.192045    4606 client.go:171] LocalClient.Create took 6.236193413s
	I1226 13:57:17.192062    4606 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-212000" took 6.236273162s
	I1226 13:57:17.192069    4606 start.go:300] post-start starting for "ingress-addon-legacy-212000" (driver="docker")
	I1226 13:57:17.192078    4606 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 13:57:17.192143    4606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 13:57:17.192198    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:17.245654    4606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 13:57:17.333320    4606 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 13:57:17.337228    4606 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 13:57:17.337252    4606 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 13:57:17.337266    4606 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 13:57:17.337272    4606 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 13:57:17.337282    4606 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17857-1142/.minikube/addons for local assets ...
	I1226 13:57:17.337375    4606 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17857-1142/.minikube/files for local assets ...
	I1226 13:57:17.337565    4606 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/ssl/certs/16122.pem -> 16122.pem in /etc/ssl/certs
	I1226 13:57:17.337572    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/ssl/certs/16122.pem -> /etc/ssl/certs/16122.pem
	I1226 13:57:17.337773    4606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 13:57:17.345809    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/ssl/certs/16122.pem --> /etc/ssl/certs/16122.pem (1708 bytes)
	I1226 13:57:17.365828    4606 start.go:303] post-start completed in 173.750694ms
	I1226 13:57:17.366399    4606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-212000
	I1226 13:57:17.418173    4606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/config.json ...
	I1226 13:57:17.418617    4606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 13:57:17.418688    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:17.469530    4606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 13:57:17.553826    4606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 13:57:17.558661    4606 start.go:128] duration metric: createHost completed in 6.625573873s
	I1226 13:57:17.558676    4606 start.go:83] releasing machines lock for "ingress-addon-legacy-212000", held for 6.625660387s
	I1226 13:57:17.558755    4606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-212000
	I1226 13:57:17.609491    4606 ssh_runner.go:195] Run: cat /version.json
	I1226 13:57:17.609516    4606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 13:57:17.609559    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:17.609590    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:17.664444    4606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 13:57:17.664455    4606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 13:57:17.856305    4606 ssh_runner.go:195] Run: systemctl --version
	I1226 13:57:17.861183    4606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 13:57:17.865974    4606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1226 13:57:17.887674    4606 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1226 13:57:17.887757    4606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1226 13:57:17.903080    4606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1226 13:57:17.917961    4606 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 13:57:17.932229    4606 start.go:475] detecting cgroup driver to use...
	I1226 13:57:17.932246    4606 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 13:57:17.932358    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 13:57:17.946713    4606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1226 13:57:17.956039    4606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 13:57:17.965283    4606 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 13:57:17.965346    4606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 13:57:17.974559    4606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 13:57:17.983669    4606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 13:57:17.992839    4606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 13:57:18.001900    4606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 13:57:18.010496    4606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1226 13:57:18.019665    4606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 13:57:18.027774    4606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 13:57:18.035680    4606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 13:57:18.089735    4606 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1226 13:57:18.166349    4606 start.go:475] detecting cgroup driver to use...
	I1226 13:57:18.166375    4606 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 13:57:18.166450    4606 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 13:57:18.191607    4606 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1226 13:57:18.191682    4606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 13:57:18.202891    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 13:57:18.219675    4606 ssh_runner.go:195] Run: which cri-dockerd
	I1226 13:57:18.224706    4606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 13:57:18.233896    4606 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 13:57:18.251219    4606 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 13:57:18.317295    4606 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 13:57:18.396748    4606 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 13:57:18.396844    4606 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 13:57:18.412891    4606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 13:57:18.496401    4606 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 13:57:18.727739    4606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 13:57:18.749913    4606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
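
	The 130-byte /etc/docker/daemon.json is scp'd from memory and not echoed in the log; a minimal equivalent that selects the cgroupfs driver, plus the check minikube itself runs later (docker info with a CgroupDriver template), would be:

	  printf '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }\n' | sudo tee /etc/docker/daemon.json
	  sudo systemctl restart docker
	  docker info --format '{{.CgroupDriver}}'    # expected: cgroupfs
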
	I1226 13:57:18.798028    4606 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1226 13:57:18.798155    4606 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-212000 dig +short host.docker.internal
	I1226 13:57:18.916423    4606 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1226 13:57:18.916523    4606 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1226 13:57:18.920986    4606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
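
	Note the grep-then-cp pattern for /etc/hosts rather than sed -i: inside a container /etc/hosts is a bind mount, and sed -i replaces files by rename, which fails across mounts, so rewriting through a temp file and cp is the safe route (that rationale is an inference; the log only shows the command). The generic form of the pattern, with a hypothetical name and IP:

	  { grep -v $'\thost.example$' /etc/hosts; echo $'10.0.0.1\thost.example'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
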
	I1226 13:57:18.931683    4606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 13:57:18.983359    4606 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1226 13:57:18.983437    4606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 13:57:19.003545    4606 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1226 13:57:19.003561    4606 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1226 13:57:19.003616    4606 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1226 13:57:19.011748    4606 ssh_runner.go:195] Run: which lz4
	I1226 13:57:19.015627    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1226 13:57:19.015732    4606 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1226 13:57:19.019570    4606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1226 13:57:19.019592    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1226 13:57:24.732744    4606 docker.go:635] Took 5.717089 seconds to copy over tarball
	I1226 13:57:24.732837    4606 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1226 13:57:26.403410    4606 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.670568152s)
	I1226 13:57:26.403426    4606 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1226 13:57:26.447137    4606 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1226 13:57:26.455879    4606 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1226 13:57:26.470967    4606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 13:57:26.525945    4606 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 13:57:27.554884    4606 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.02892653s)
	I1226 13:57:27.554993    4606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 13:57:27.574741    4606 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1226 13:57:27.574758    4606 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1226 13:57:27.574769    4606 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1226 13:57:27.580798    4606 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 13:57:27.580973    4606 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 13:57:27.581974    4606 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 13:57:27.583731    4606 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1226 13:57:27.583885    4606 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1226 13:57:27.583928    4606 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 13:57:27.583956    4606 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1226 13:57:27.584301    4606 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 13:57:27.587914    4606 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 13:57:27.588265    4606 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 13:57:27.589608    4606 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 13:57:27.590725    4606 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1226 13:57:27.591278    4606 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1226 13:57:27.591341    4606 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 13:57:27.592766    4606 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1226 13:57:27.593413    4606 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 13:57:29.391790    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1226 13:57:29.410292    4606 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1226 13:57:29.410330    4606 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 13:57:29.410391    4606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1226 13:57:29.429449    4606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1226 13:57:29.449224    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1226 13:57:29.466526    4606 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1226 13:57:29.466550    4606 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 13:57:29.466612    4606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1226 13:57:29.485387    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 13:57:29.485610    4606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1226 13:57:29.487521    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1226 13:57:29.501601    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1226 13:57:29.503178    4606 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1226 13:57:29.503209    4606 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 13:57:29.503286    4606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 13:57:29.505135    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1226 13:57:29.507876    4606 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1226 13:57:29.507906    4606 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1226 13:57:29.507984    4606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1226 13:57:29.517005    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1226 13:57:29.563352    4606 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1226 13:57:29.563404    4606 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 13:57:29.563532    4606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1226 13:57:29.567915    4606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1226 13:57:29.569725    4606 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1226 13:57:29.569760    4606 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1226 13:57:29.569798    4606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1226 13:57:29.569846    4606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1226 13:57:29.577062    4606 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1226 13:57:29.577100    4606 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1226 13:57:29.577186    4606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1226 13:57:29.586503    4606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1226 13:57:29.591767    4606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1226 13:57:29.597776    4606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1226 13:57:29.999231    4606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 13:57:30.018593    4606 cache_images.go:92] LoadImages completed in 2.443825866s
	W1226 13:57:30.018645    4606 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
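
	The X lines above trace back to the tag mismatch reported earlier: the preloaded images carry k8s.gcr.io tags while this minikube expects registry.k8s.io ones, so LoadImages falls back to per-image tarballs under the host cache, and the kube-scheduler file is simply absent there. A quick check on the host:

	  ls -l /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/images/amd64/registry.k8s.io/
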
	I1226 13:57:30.018733    4606 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 13:57:30.067912    4606 cni.go:84] Creating CNI manager for ""
	I1226 13:57:30.067930    4606 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1226 13:57:30.067941    4606 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 13:57:30.067957    4606 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-212000 NodeName:ingress-addon-legacy-212000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1226 13:57:30.068041    4606 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-212000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 13:57:30.068086    4606 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-212000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
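
	The generated kubeadm config (apiVersion v1beta2) and kubelet unit above pin everything to Kubernetes v1.18.20. One way to see which control-plane images such a version needs, run inside the node container against the kubeadm binary that the next line finds under /var/lib/minikube/binaries (a standard kubeadm subcommand; the list should match the preloaded images shown earlier):

	  /var/lib/minikube/binaries/v1.18.20/kubeadm config images list --kubernetes-version v1.18.20
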
	I1226 13:57:30.068148    4606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1226 13:57:30.076911    4606 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 13:57:30.076975    4606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 13:57:30.085091    4606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1226 13:57:30.100516    4606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1226 13:57:30.116002    4606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1226 13:57:30.131365    4606 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 13:57:30.135525    4606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 13:57:30.145672    4606 certs.go:56] Setting up /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000 for IP: 192.168.49.2
	I1226 13:57:30.145692    4606 certs.go:190] acquiring lock for shared ca certs: {Name:mka50056f4f913dbfc39d9ec6dce51b7903f470c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:30.145882    4606 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.key
	I1226 13:57:30.145950    4606 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17857-1142/.minikube/proxy-client-ca.key
	I1226 13:57:30.145996    4606 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/client.key
	I1226 13:57:30.146009    4606 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/client.crt with IP's: []
	I1226 13:57:30.216088    4606 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/client.crt ...
	I1226 13:57:30.216101    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/client.crt: {Name:mkdb89593cda27d340c2fc40a1cc08d197bab546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:30.216403    4606 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/client.key ...
	I1226 13:57:30.216412    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/client.key: {Name:mk5500dd889f869ee19b4be531f65696e771272f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:30.216636    4606 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.key.dd3b5fb2
	I1226 13:57:30.216655    4606 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 13:57:30.373223    4606 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.crt.dd3b5fb2 ...
	I1226 13:57:30.373234    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.crt.dd3b5fb2: {Name:mk9341822f5ae85d7bfe80adcbdf65e2d2422e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:30.373493    4606 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.key.dd3b5fb2 ...
	I1226 13:57:30.373502    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.key.dd3b5fb2: {Name:mk24e5b8e5888d42d7ca37620dd788ed2eafb6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:30.373692    4606 certs.go:337] copying /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.crt
	I1226 13:57:30.373868    4606 certs.go:341] copying /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.key
	I1226 13:57:30.374039    4606 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.key
	I1226 13:57:30.374052    4606 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.crt with IP's: []
	I1226 13:57:30.536242    4606 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.crt ...
	I1226 13:57:30.536257    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.crt: {Name:mkd69346a6f54560ee8de5643374cff16fdc56e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:30.536574    4606 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.key ...
	I1226 13:57:30.536583    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.key: {Name:mk52b8c8e0cac965b70977c514743dc17a8e0dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:57:30.536828    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 13:57:30.536859    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 13:57:30.536880    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 13:57:30.536900    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 13:57:30.536919    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 13:57:30.536937    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 13:57:30.536956    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 13:57:30.536978    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 13:57:30.537090    4606 certs.go:437] found cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/1612.pem (1338 bytes)
	W1226 13:57:30.537143    4606 certs.go:433] ignoring /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/1612_empty.pem, impossibly tiny 0 bytes
	I1226 13:57:30.537153    4606 certs.go:437] found cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca-key.pem (1679 bytes)
	I1226 13:57:30.537186    4606 certs.go:437] found cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem (1078 bytes)
	I1226 13:57:30.537218    4606 certs.go:437] found cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem (1123 bytes)
	I1226 13:57:30.537250    4606 certs.go:437] found cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/Users/jenkins/minikube-integration/17857-1142/.minikube/certs/key.pem (1679 bytes)
	I1226 13:57:30.537323    4606 certs.go:437] found cert: /Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/ssl/certs/16122.pem (1708 bytes)
	I1226 13:57:30.537370    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/ssl/certs/16122.pem -> /usr/share/ca-certificates/16122.pem
	I1226 13:57:30.537392    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 13:57:30.537409    4606 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/1612.pem -> /usr/share/ca-certificates/1612.pem
	I1226 13:57:30.537880    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 13:57:30.558580    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 13:57:30.579031    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 13:57:30.599504    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/ingress-addon-legacy-212000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1226 13:57:30.619650    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 13:57:30.639968    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1226 13:57:30.660257    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 13:57:30.680459    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 13:57:30.700950    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/ssl/certs/16122.pem --> /usr/share/ca-certificates/16122.pem (1708 bytes)
	I1226 13:57:30.721509    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 13:57:30.741769    4606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/1612.pem --> /usr/share/ca-certificates/1612.pem (1338 bytes)
	I1226 13:57:30.761891    4606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 13:57:30.777264    4606 ssh_runner.go:195] Run: openssl version
	I1226 13:57:30.782700    4606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 13:57:30.791686    4606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 13:57:30.795541    4606 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 13:57:30.795582    4606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 13:57:30.802161    4606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 13:57:30.811320    4606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1612.pem && ln -fs /usr/share/ca-certificates/1612.pem /etc/ssl/certs/1612.pem"
	I1226 13:57:30.820047    4606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1612.pem
	I1226 13:57:30.824289    4606 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:51 /usr/share/ca-certificates/1612.pem
	I1226 13:57:30.824336    4606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1612.pem
	I1226 13:57:30.830836    4606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1612.pem /etc/ssl/certs/51391683.0"
	I1226 13:57:30.839838    4606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16122.pem && ln -fs /usr/share/ca-certificates/16122.pem /etc/ssl/certs/16122.pem"
	I1226 13:57:30.848832    4606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16122.pem
	I1226 13:57:30.852898    4606 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:51 /usr/share/ca-certificates/16122.pem
	I1226 13:57:30.852943    4606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16122.pem
	I1226 13:57:30.859492    4606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16122.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 13:57:30.868242    4606 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 13:57:30.872160    4606 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 13:57:30.872205    4606 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:57:30.872308    4606 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 13:57:30.890411    4606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 13:57:30.898766    4606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 13:57:30.906983    4606 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 13:57:30.907037    4606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 13:57:30.915051    4606 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 13:57:30.915083    4606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 13:57:30.971651    4606 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1226 13:57:30.971697    4606 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 13:57:31.197522    4606 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 13:57:31.197603    4606 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 13:57:31.197683    4606 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 13:57:31.361969    4606 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 13:57:31.362578    4606 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 13:57:31.362626    4606 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 13:57:31.434699    4606 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 13:57:31.476831    4606 out.go:204]   - Generating certificates and keys ...
	I1226 13:57:31.476917    4606 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 13:57:31.476992    4606 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 13:57:31.483596    4606 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 13:57:31.678634    4606 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 13:57:31.927547    4606 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 13:57:32.180858    4606 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 13:57:32.293313    4606 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 13:57:32.293464    4606 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-212000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 13:57:32.348274    4606 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 13:57:32.348425    4606 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-212000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 13:57:32.625473    4606 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 13:57:32.703786    4606 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 13:57:32.988600    4606 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 13:57:32.988658    4606 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 13:57:33.218376    4606 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 13:57:33.490530    4606 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 13:57:33.583878    4606 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 13:57:33.788228    4606 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 13:57:33.789734    4606 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 13:57:33.811412    4606 out.go:204]   - Booting up control plane ...
	I1226 13:57:33.811584    4606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 13:57:33.811724    4606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 13:57:33.811847    4606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 13:57:33.811987    4606 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 13:57:33.812250    4606 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 13:58:13.797771    4606 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1226 13:58:13.798199    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 13:58:13.798347    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 13:58:18.799620    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 13:58:18.799858    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 13:58:28.800091    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 13:58:28.800242    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 13:58:48.801043    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 13:58:48.801202    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 13:59:28.803533    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 13:59:28.803875    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 13:59:28.803900    4606 kubeadm.go:322] 
	I1226 13:59:28.803966    4606 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1226 13:59:28.804063    4606 kubeadm.go:322] 		timed out waiting for the condition
	I1226 13:59:28.804087    4606 kubeadm.go:322] 
	I1226 13:59:28.804138    4606 kubeadm.go:322] 	This error is likely caused by:
	I1226 13:59:28.804198    4606 kubeadm.go:322] 		- The kubelet is not running
	I1226 13:59:28.804392    4606 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1226 13:59:28.804404    4606 kubeadm.go:322] 
	I1226 13:59:28.804537    4606 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1226 13:59:28.804581    4606 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1226 13:59:28.804628    4606 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1226 13:59:28.804640    4606 kubeadm.go:322] 
	I1226 13:59:28.804769    4606 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1226 13:59:28.804855    4606 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1226 13:59:28.804864    4606 kubeadm.go:322] 
	I1226 13:59:28.804960    4606 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1226 13:59:28.805010    4606 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1226 13:59:28.805105    4606 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1226 13:59:28.805148    4606 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1226 13:59:28.805160    4606 kubeadm.go:322] 
	I1226 13:59:28.806141    4606 kubeadm.go:322] W1226 21:57:30.971047    1703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1226 13:59:28.806324    4606 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1226 13:59:28.806467    4606 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1226 13:59:28.806607    4606 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1226 13:59:28.806704    4606 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 13:59:28.806819    4606 kubeadm.go:322] W1226 21:57:33.793703    1703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 13:59:28.806925    4606 kubeadm.go:322] W1226 21:57:33.794458    1703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 13:59:28.807011    4606 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1226 13:59:28.807085    4606 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1226 13:59:28.807187    4606 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-212000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-212000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1226 21:57:30.971047    1703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1226 21:57:33.793703    1703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1226 21:57:33.794458    1703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1226 13:59:28.807223    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1226 13:59:29.213999    4606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 13:59:29.224483    4606 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 13:59:29.224553    4606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 13:59:29.232739    4606 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 13:59:29.232758    4606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 13:59:29.281357    4606 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1226 13:59:29.281407    4606 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 13:59:29.523755    4606 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 13:59:29.523838    4606 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 13:59:29.523920    4606 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 13:59:29.699306    4606 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 13:59:29.699990    4606 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 13:59:29.700027    4606 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 13:59:29.780171    4606 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 13:59:29.801634    4606 out.go:204]   - Generating certificates and keys ...
	I1226 13:59:29.801737    4606 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 13:59:29.801810    4606 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 13:59:29.801904    4606 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1226 13:59:29.801971    4606 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1226 13:59:29.802034    4606 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1226 13:59:29.802109    4606 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1226 13:59:29.802177    4606 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1226 13:59:29.802222    4606 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1226 13:59:29.802278    4606 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1226 13:59:29.802344    4606 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1226 13:59:29.802398    4606 kubeadm.go:322] [certs] Using the existing "sa" key
	I1226 13:59:29.802459    4606 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 13:59:29.890562    4606 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 13:59:30.438601    4606 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 13:59:30.505082    4606 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 13:59:30.551324    4606 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 13:59:30.552439    4606 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 13:59:30.574125    4606 out.go:204]   - Booting up control plane ...
	I1226 13:59:30.574265    4606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 13:59:30.574408    4606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 13:59:30.574524    4606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 13:59:30.574636    4606 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 13:59:30.574894    4606 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 14:00:10.559948    4606 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1226 14:00:10.560266    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 14:00:10.560411    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 14:00:15.562278    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 14:00:15.562497    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 14:00:25.563630    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 14:00:25.563840    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 14:00:45.564647    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 14:00:45.564791    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 14:01:25.537784    4606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1226 14:01:25.537994    4606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1226 14:01:25.538009    4606 kubeadm.go:322] 
	I1226 14:01:25.538072    4606 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1226 14:01:25.538114    4606 kubeadm.go:322] 		timed out waiting for the condition
	I1226 14:01:25.538123    4606 kubeadm.go:322] 
	I1226 14:01:25.538170    4606 kubeadm.go:322] 	This error is likely caused by:
	I1226 14:01:25.538205    4606 kubeadm.go:322] 		- The kubelet is not running
	I1226 14:01:25.538389    4606 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1226 14:01:25.538410    4606 kubeadm.go:322] 
	I1226 14:01:25.538586    4606 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1226 14:01:25.538629    4606 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1226 14:01:25.538672    4606 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1226 14:01:25.538686    4606 kubeadm.go:322] 
	I1226 14:01:25.538782    4606 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1226 14:01:25.538886    4606 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1226 14:01:25.538901    4606 kubeadm.go:322] 
	I1226 14:01:25.538997    4606 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1226 14:01:25.539054    4606 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1226 14:01:25.539144    4606 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1226 14:01:25.539179    4606 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1226 14:01:25.539187    4606 kubeadm.go:322] 
	I1226 14:01:25.540923    4606 kubeadm.go:322] W1226 21:59:29.280640    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1226 14:01:25.541068    4606 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1226 14:01:25.541128    4606 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1226 14:01:25.541252    4606 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1226 14:01:25.541377    4606 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 14:01:25.541480    4606 kubeadm.go:322] W1226 21:59:30.555976    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 14:01:25.541586    4606 kubeadm.go:322] W1226 21:59:30.556830    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 14:01:25.541650    4606 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1226 14:01:25.541714    4606 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1226 14:01:25.541744    4606 kubeadm.go:406] StartCluster complete in 3m54.699433052s
	I1226 14:01:25.541835    4606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1226 14:01:25.559863    4606 logs.go:284] 0 containers: []
	W1226 14:01:25.559877    4606 logs.go:286] No container was found matching "kube-apiserver"
	I1226 14:01:25.559947    4606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1226 14:01:25.577645    4606 logs.go:284] 0 containers: []
	W1226 14:01:25.577659    4606 logs.go:286] No container was found matching "etcd"
	I1226 14:01:25.577717    4606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1226 14:01:25.596771    4606 logs.go:284] 0 containers: []
	W1226 14:01:25.596785    4606 logs.go:286] No container was found matching "coredns"
	I1226 14:01:25.596865    4606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1226 14:01:25.614968    4606 logs.go:284] 0 containers: []
	W1226 14:01:25.614981    4606 logs.go:286] No container was found matching "kube-scheduler"
	I1226 14:01:25.615054    4606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1226 14:01:25.633017    4606 logs.go:284] 0 containers: []
	W1226 14:01:25.633030    4606 logs.go:286] No container was found matching "kube-proxy"
	I1226 14:01:25.633101    4606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1226 14:01:25.651753    4606 logs.go:284] 0 containers: []
	W1226 14:01:25.651766    4606 logs.go:286] No container was found matching "kube-controller-manager"
	I1226 14:01:25.651836    4606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1226 14:01:25.670405    4606 logs.go:284] 0 containers: []
	W1226 14:01:25.670419    4606 logs.go:286] No container was found matching "kindnet"
	I1226 14:01:25.670431    4606 logs.go:123] Gathering logs for kubelet ...
	I1226 14:01:25.670439    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1226 14:01:25.705603    4606 logs.go:123] Gathering logs for dmesg ...
	I1226 14:01:25.705618    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1226 14:01:25.717601    4606 logs.go:123] Gathering logs for describe nodes ...
	I1226 14:01:25.717619    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1226 14:01:25.769649    4606 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1226 14:01:25.769664    4606 logs.go:123] Gathering logs for Docker ...
	I1226 14:01:25.769672    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1226 14:01:25.785859    4606 logs.go:123] Gathering logs for container status ...
	I1226 14:01:25.785874    4606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1226 14:01:25.857488    4606 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1226 21:59:29.280640    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1226 21:59:30.555976    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1226 21:59:30.556830    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1226 14:01:25.857511    4606 out.go:239] * 
	W1226 14:01:25.857551    4606 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1226 21:59:29.280640    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1226 21:59:30.555976    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1226 21:59:30.556830    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1226 14:01:25.857565    4606 out.go:239] * 
	W1226 14:01:25.858222    4606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 14:01:25.921893    4606 out.go:177] 
	W1226 14:01:25.965012    4606 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1226 21:59:29.280640    4737 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1226 21:59:30.555976    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1226 21:59:30.556830    4737 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1226 14:01:25.965085    4606 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1226 14:01:25.965111    4606 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1226 14:01:25.986911    4606 out.go:177] 

                                                
                                                
** /stderr **
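The kubeadm message above already spells out the triage steps; consolidated as one hedged sketch (assuming shell access to the node container, e.g. `minikube ssh -p ingress-addon-legacy-212000`, and the docker runtime shown in this log):

	# All commands below are taken from the kubeadm advice printed above.
	systemctl status kubelet                   # is the kubelet service active?
	journalctl -xeu kubelet                    # why did it exit or never come up?
	curl -sSL http://localhost:10248/healthz   # the probe kubeadm polls on port 10248
	docker ps -a | grep kube | grep -v pause   # list crashed control-plane containers
	docker logs CONTAINERID                    # CONTAINERID is the placeholder from the advice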
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-212000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (263.27s)
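Two preflight warnings repeated above point at a plausible cause: Docker reports the "cgroupfs" cgroup driver while "systemd" is recommended, and Docker 24.0.7 is far newer than the last version validated for Kubernetes v1.18.20. A remediation sketch, hedged: the daemon.json path assumes a stock Docker setup, and the second command simply applies the suggestion minikube itself prints in the stderr above.

	# Option 1 (assumption: /etc/docker/daemon.json is the active daemon config):
	sudo tee /etc/docker/daemon.json >/dev/null <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}
	EOF
	sudo systemctl restart docker

	# Option 2, as suggested by minikube in the stderr above:
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-212000 \
	  --kubernetes-version=v1.18.20 --extra-config=kubelet.cgroup-driver=systemd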

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.76s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-212000 addons enable ingress --alsologtostderr -v=5
E1226 14:02:20.013504    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-212000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.334558951s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:01:26.143364    4862 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:01:26.143683    4862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:01:26.143689    4862 out.go:309] Setting ErrFile to fd 2...
	I1226 14:01:26.143693    4862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:01:26.143878    4862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:01:26.144252    4862 mustload.go:65] Loading cluster: ingress-addon-legacy-212000
	I1226 14:01:26.144567    4862 config.go:182] Loaded profile config "ingress-addon-legacy-212000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1226 14:01:26.144583    4862 addons.go:600] checking whether the cluster is paused
	I1226 14:01:26.144665    4862 config.go:182] Loaded profile config "ingress-addon-legacy-212000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1226 14:01:26.144682    4862 host.go:66] Checking if "ingress-addon-legacy-212000" exists ...
	I1226 14:01:26.145123    4862 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Status}}
	I1226 14:01:26.195241    4862 ssh_runner.go:195] Run: systemctl --version
	I1226 14:01:26.195331    4862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 14:01:26.246960    4862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 14:01:26.330779    4862 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 14:01:26.371025    4862 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1226 14:01:26.391936    4862 config.go:182] Loaded profile config "ingress-addon-legacy-212000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1226 14:01:26.391951    4862 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-212000"
	I1226 14:01:26.391959    4862 addons.go:237] Setting addon ingress=true in "ingress-addon-legacy-212000"
	I1226 14:01:26.391985    4862 host.go:66] Checking if "ingress-addon-legacy-212000" exists ...
	I1226 14:01:26.392282    4862 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Status}}
	I1226 14:01:26.464590    4862 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1226 14:01:26.485734    4862 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1226 14:01:26.506724    4862 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1226 14:01:26.527524    4862 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1226 14:01:26.549031    4862 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 14:01:26.549057    4862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1226 14:01:26.549151    4862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 14:01:26.600704    4862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 14:01:26.695890    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:26.743410    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:26.743437    4862 retry.go:31] will retry after 164.030373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:26.908807    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:26.959703    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:26.959720    4862 retry.go:31] will retry after 398.496073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:27.358455    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:27.407715    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:27.407738    4862 retry.go:31] will retry after 301.792464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:27.709666    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:27.760417    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:27.760436    4862 retry.go:31] will retry after 688.735082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:28.449362    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:28.500185    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:28.500202    4862 retry.go:31] will retry after 1.831922127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:30.331853    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:30.379115    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:30.379133    4862 retry.go:31] will retry after 1.070489126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:31.449665    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:31.505899    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:31.505927    4862 retry.go:31] will retry after 3.80501616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:35.310998    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:35.361624    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:35.361642    4862 retry.go:31] will retry after 6.210641915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:41.570706    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:41.626002    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:41.626020    4862 retry.go:31] will retry after 4.487934665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:46.113749    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:46.164191    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:46.164209    4862 retry.go:31] will retry after 10.107122974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:56.270818    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:01:56.321402    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:01:56.321420    4862 retry.go:31] will retry after 18.889757153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:15.211530    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:02:15.266670    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:15.266693    4862 retry.go:31] will retry after 19.608327686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:34.874812    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:02:34.938422    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:34.938440    4862 retry.go:31] will retry after 20.309973163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:55.248190    4862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1226 14:02:55.297070    4862 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:55.297098    4862 addons.go:473] Verifying addon ingress=true in "ingress-addon-legacy-212000"
	I1226 14:02:55.318741    4862 out.go:177] * Verifying ingress addon...
	I1226 14:02:55.341528    4862 out.go:177] 
	W1226 14:02:55.363692    4862 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-212000" does not exist: client config: context "ingress-addon-legacy-212000" does not exist]
	W1226 14:02:55.363719    4862 out.go:239] * 
	W1226 14:02:55.366901    4862 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 14:02:55.388594    4862 out.go:177] 

                                                
                                                
** /stderr **
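Every apply retry above fails identically: the apiserver on localhost:8443 refuses connections, i.e. the control plane from the previous step never came up, so the addon failure is a downstream symptom. A minimal manual probe of the same path the retries exercise (profile name, kubeconfig, and kubectl path copied from the log; a sketch, not part of the harness):

	out/minikube-darwin-amd64 -p ingress-addon-legacy-212000 ssh -- \
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.18.20/kubectl cluster-info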
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-212000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-212000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9",
	        "Created": "2023-12-26T21:57:14.231378159Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53221,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:57:14.44476773Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/hosts",
	        "LogPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9-json.log",
	        "Name": "/ingress-addon-legacy-212000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-212000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-212000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad-init/diff:/var/lib/docker/overlay2/9504b64c51e562d355bf6588d6f3a8de52c401736ff8b5d6bc5c642b8ed6a207/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-212000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-212000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-212000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-212000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-212000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6cfee8a401de57eac3c02136fab23478d742649b5313ba558204f08f2d4ceef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6cfee8a401d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-212000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "755360b2e3ae",
	                        "ingress-addon-legacy-212000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "f18f3794c7e5fddb27c76273f30f639278bf224b14dcb34feb82a422c43b9ebc",
	                    "EndpointID": "850cd54f9d707054f402b3965f5b70312c302c6c4b4b7e112f185020ad080ce7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
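For reference, the Ports block in the inspect output above is how minikube locates the host-side SSH endpoint (22/tcp -> 127.0.0.1:50500 in this run); the same Go-template lookup appears verbatim later in this log. A standalone sketch of that query, using the profile name from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-212000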
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-212000 -n ingress-addon-legacy-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-212000 -n ingress-addon-legacy-212000: exit status 6 (372.84059ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1226 14:02:55.828186    4910 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-212000" does not appear in /Users/jenkins/minikube-integration/17857-1142/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-212000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.76s)
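The root cause of the status error above is a kubeconfig that no longer contains an entry for the profile, so `status` cannot extract the endpoint IP even though the container reports Running. A minimal recovery sketch, assuming the profile and its container still exist (commands for illustration, not part of the test run):

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-212000
	kubectl config get-contexts        # the profile should now be listed
	kubectl --context ingress-addon-legacy-212000 get nodes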

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (114.11s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-212000 addons enable ingress-dns --alsologtostderr -v=5
E1226 14:03:46.184946    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:04:36.157495    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-212000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m53.680781814s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I1226 14:02:55.892966    4920 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:02:55.893351    4920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:02:55.893357    4920 out.go:309] Setting ErrFile to fd 2...
	I1226 14:02:55.893361    4920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:02:55.893535    4920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:02:55.893931    4920 mustload.go:65] Loading cluster: ingress-addon-legacy-212000
	I1226 14:02:55.894206    4920 config.go:182] Loaded profile config "ingress-addon-legacy-212000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1226 14:02:55.894222    4920 addons.go:600] checking whether the cluster is paused
	I1226 14:02:55.894300    4920 config.go:182] Loaded profile config "ingress-addon-legacy-212000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1226 14:02:55.894316    4920 host.go:66] Checking if "ingress-addon-legacy-212000" exists ...
	I1226 14:02:55.894691    4920 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Status}}
	I1226 14:02:55.944763    4920 ssh_runner.go:195] Run: systemctl --version
	I1226 14:02:55.944849    4920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 14:02:55.994656    4920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 14:02:56.080267    4920 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 14:02:56.121505    4920 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1226 14:02:56.142261    4920 config.go:182] Loaded profile config "ingress-addon-legacy-212000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1226 14:02:56.142276    4920 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-212000"
	I1226 14:02:56.142286    4920 addons.go:237] Setting addon ingress-dns=true in "ingress-addon-legacy-212000"
	I1226 14:02:56.142325    4920 host.go:66] Checking if "ingress-addon-legacy-212000" exists ...
	I1226 14:02:56.142747    4920 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-212000 --format={{.State.Status}}
	I1226 14:02:56.214958    4920 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1226 14:02:56.236202    4920 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1226 14:02:56.257165    4920 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 14:02:56.257186    4920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1226 14:02:56.257272    4920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-212000
	I1226 14:02:56.310863    4920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/ingress-addon-legacy-212000/id_rsa Username:docker}
	I1226 14:02:56.405463    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:02:56.453190    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:56.453218    4920 retry.go:31] will retry after 339.943369ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:56.793916    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:02:56.844338    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:56.844360    4920 retry.go:31] will retry after 236.083313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:57.080993    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:02:57.142718    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:57.142739    4920 retry.go:31] will retry after 768.337257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:57.911348    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:02:57.964751    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:57.964770    4920 retry.go:31] will retry after 441.11233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:58.406194    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:02:58.464474    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:02:58.464499    4920 retry.go:31] will retry after 1.687221757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:00.152286    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:03:00.213051    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:00.213067    4920 retry.go:31] will retry after 2.708014266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:02.921232    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:03:02.973269    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:02.973287    4920 retry.go:31] will retry after 2.188871117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:05.162508    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:03:05.212320    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:05.212347    4920 retry.go:31] will retry after 5.813488108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:11.026654    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:03:11.088438    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:11.088457    4920 retry.go:31] will retry after 5.863686618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:16.954346    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:03:17.006642    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:17.006669    4920 retry.go:31] will retry after 11.32025415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:28.327450    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:03:28.383015    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:28.383031    4920 retry.go:31] will retry after 17.567102449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:45.951763    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:03:46.005142    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:03:46.005165    4920 retry.go:31] will retry after 27.604226493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:04:13.609430    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:04:13.666427    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:04:13.666451    4920 retry.go:31] will retry after 35.692410103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:04:49.360488    4920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1226 14:04:49.428178    4920 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1226 14:04:49.450005    4920 out.go:177] 
	W1226 14:04:49.470933    4920 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1226 14:04:49.470970    4920 out.go:239] * 
	W1226 14:04:49.474308    4920 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 14:04:49.495576    4920 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
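Every `kubectl apply` retry above fails with a connection refused on localhost:8443, meaning the apiserver inside the node never came up; the enable loop backs off from roughly 340ms to ~35s before exiting with MK_ADDON_ENABLE. One way to confirm the apiserver state from the host, assuming the node container is still reachable over SSH and curl is available in the kicbase image (a diagnostic sketch, not part of the suite):

	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-212000 -- sudo docker ps --filter name=kube-apiserver
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-212000 -- curl -sk https://localhost:8443/healthz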
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-212000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-212000:

-- stdout --
	[
	    {
	        "Id": "755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9",
	        "Created": "2023-12-26T21:57:14.231378159Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53221,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:57:14.44476773Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/hosts",
	        "LogPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9-json.log",
	        "Name": "/ingress-addon-legacy-212000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-212000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-212000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad-init/diff:/var/lib/docker/overlay2/9504b64c51e562d355bf6588d6f3a8de52c401736ff8b5d6bc5c642b8ed6a207/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-212000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-212000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-212000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-212000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-212000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6cfee8a401de57eac3c02136fab23478d742649b5313ba558204f08f2d4ceef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6cfee8a401d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-212000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "755360b2e3ae",
	                        "ingress-addon-legacy-212000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "f18f3794c7e5fddb27c76273f30f639278bf224b14dcb34feb82a422c43b9ebc",
	                    "EndpointID": "850cd54f9d707054f402b3965f5b70312c302c6c4b4b7e112f185020ad080ce7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-212000 -n ingress-addon-legacy-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-212000 -n ingress-addon-legacy-212000: exit status 6 (369.771136ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1226 14:04:49.936343    4978 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-212000" does not appear in /Users/jenkins/minikube-integration/17857-1142/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-212000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (114.11s)
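Note that helpers_test.go only inspects the `{{.Host}}` field of `minikube status`, which still reports Running; it is the kubeconfig side that is broken. A sketch that surfaces the other status fields (field names assumed from minikube's status template support):

	out/minikube-darwin-amd64 status -p ingress-addon-legacy-212000 --format 'host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'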

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
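The `<nil>` client here is the same kubeconfig problem surfacing through the test's client factory rather than through `minikube status`. A quick check of what that factory actually sees, using the kubeconfig path from this run:

	kubectl --kubeconfig /Users/jenkins/minikube-integration/17857-1142/kubeconfig config get-contexts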
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-212000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-212000:

-- stdout --
	[
	    {
	        "Id": "755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9",
	        "Created": "2023-12-26T21:57:14.231378159Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53221,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:57:14.44476773Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/hosts",
	        "LogPath": "/var/lib/docker/containers/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9/755360b2e3aeeb9d98803dbe341b3cbf61692a6a2ec8e0099c2e06d7558563e9-json.log",
	        "Name": "/ingress-addon-legacy-212000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-212000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-212000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad-init/diff:/var/lib/docker/overlay2/9504b64c51e562d355bf6588d6f3a8de52c401736ff8b5d6bc5c642b8ed6a207/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4a9a8fc432a420afdcc03c7ff045d48660305f8f5fed07ed90a8137d52cd1ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-212000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-212000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-212000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-212000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-212000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6cfee8a401de57eac3c02136fab23478d742649b5313ba558204f08f2d4ceef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6cfee8a401d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-212000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "755360b2e3ae",
	                        "ingress-addon-legacy-212000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "f18f3794c7e5fddb27c76273f30f639278bf224b14dcb34feb82a422c43b9ebc",
	                    "EndpointID": "850cd54f9d707054f402b3965f5b70312c302c6c4b4b7e112f185020ad080ce7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-212000 -n ingress-addon-legacy-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-212000 -n ingress-addon-legacy-212000: exit status 6 (369.936673ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1226 14:04:50.359882    4990 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-212000" does not appear in /Users/jenkins/minikube-integration/17857-1142/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-212000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

TestMountStart/serial/VerifyMountFirst (893.17s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-581000 ssh -- ls /minikube-host
E1226 14:08:46.179261    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:09:36.152070    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:10:09.231691    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:13:46.174919    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:14:36.145514    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:15:59.200036    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:18:46.220224    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:19:36.193260    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-581000 ssh -- ls /minikube-host: signal: killed (14m52.746420351s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-581000 ssh -- ls /minikube-host" : signal: killed
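The ssh `ls /minikube-host` hung for almost 15 minutes until the harness killed it. The inspect output below shows the bind `/host_mnt/Users:/minikube-host` was configured on the container, so the hang points at the Docker Desktop file share rather than a missing mount. A way to probe the mount without minikube's ssh wrapper, assuming the container is still running and `timeout` is available in the kicbase image (a diagnostic sketch):

	docker exec mount-start-1-581000 timeout 10 ls /minikube-host
	docker exec mount-start-1-581000 sh -c 'mount | grep minikube-host'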
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-581000
helpers_test.go:235: (dbg) docker inspect mount-start-1-581000:

-- stdout --
	[
	    {
	        "Id": "a25c0fcf2dc7ac9d20f4acc095ba339aee11f2b72db4179ba478d82ee530c20a",
	        "Created": "2023-12-26T22:08:29.656317258Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98805,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:08:29.864874763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/a25c0fcf2dc7ac9d20f4acc095ba339aee11f2b72db4179ba478d82ee530c20a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a25c0fcf2dc7ac9d20f4acc095ba339aee11f2b72db4179ba478d82ee530c20a/hostname",
	        "HostsPath": "/var/lib/docker/containers/a25c0fcf2dc7ac9d20f4acc095ba339aee11f2b72db4179ba478d82ee530c20a/hosts",
	        "LogPath": "/var/lib/docker/containers/a25c0fcf2dc7ac9d20f4acc095ba339aee11f2b72db4179ba478d82ee530c20a/a25c0fcf2dc7ac9d20f4acc095ba339aee11f2b72db4179ba478d82ee530c20a-json.log",
	        "Name": "/mount-start-1-581000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-1-581000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-1-581000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2a468fd88a9e6a2ff0de445cf006f96d787d0a6d79a843827ca16a0a40a8e84-init/diff:/var/lib/docker/overlay2/9504b64c51e562d355bf6588d6f3a8de52c401736ff8b5d6bc5c642b8ed6a207/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2a468fd88a9e6a2ff0de445cf006f96d787d0a6d79a843827ca16a0a40a8e84/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2a468fd88a9e6a2ff0de445cf006f96d787d0a6d79a843827ca16a0a40a8e84/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2a468fd88a9e6a2ff0de445cf006f96d787d0a6d79a843827ca16a0a40a8e84/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-1-581000",
	                "Source": "/var/lib/docker/volumes/mount-start-1-581000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-1-581000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-1-581000",
	                "name.minikube.sigs.k8s.io": "mount-start-1-581000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c479fb01feca12ef1c24a35417deb3aa08b325c40e40f99a8d16ac47bd5569a3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50771"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50772"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50773"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50769"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50770"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c479fb01feca",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-1-581000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a25c0fcf2dc7",
	                        "mount-start-1-581000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "NetworkID": "6685105f2d41537144a7363d303854eebc3ed5c6fc2b7721772dbf568efa2a42",
	                    "EndpointID": "16db72f9ee27db816a32b69543a3240323130e2ab0785c7e399701048227e05f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
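
Note the Mounts array above: the bind from /host_mnt/Users to /minikube-host is present and RW, so the failure mode here is the `ssh -- ls /minikube-host` command hanging until the harness killed it (signal: killed after ~14m52s), not a missing mount. A minimal Go sketch of that kind of check, decoding `docker inspect` JSON (the helper name hasHostMount is hypothetical, not minikube's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// hasHostMount decodes `docker inspect <container>` and reports whether a
	// bind mount with the given destination exists (e.g. /minikube-host).
	func hasHostMount(container, dest string) (bool, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return false, err
		}
		// docker inspect prints a JSON array of container descriptions.
		var infos []struct {
			Mounts []struct{ Type, Source, Destination string }
		}
		if err := json.Unmarshal(out, &infos); err != nil {
			return false, err
		}
		for _, info := range infos {
			for _, m := range info.Mounts {
				if m.Type == "bind" && m.Destination == dest {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasHostMount("mount-start-1-581000", "/minikube-host")
		fmt.Println(ok, err)
	}

Against the inspect output above this would report true, which is what points the blame at the hung ls rather than the mount setup.
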
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-581000 -n mount-start-1-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-581000 -n mount-start-1-581000: exit status 6 (370.883187ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1226 14:23:28.331061    6745 status.go:415] kubeconfig endpoint: extract IP: "mount-start-1-581000" does not appear in /Users/jenkins/minikube-integration/17857-1142/kubeconfig

** /stderr **
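
The status probe fails at the kubeconfig step: the profile has no cluster entry, so no endpoint IP can be extracted. Roughly what that lookup amounts to, sketched here with k8s.io/client-go (an assumption for illustration; minikube's actual extraction lives in status.go):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd" // requires the k8s.io/client-go module
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/17857-1142/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// The profile name must appear as a cluster key to yield an endpoint.
		cluster, ok := cfg.Clusters["mount-start-1-581000"]
		if !ok {
			fmt.Println(`"mount-start-1-581000" does not appear in kubeconfig`)
			return
		}
		fmt.Println("endpoint:", cluster.Server)
	}
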
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-581000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (893.17s)

TestMultiNode/serial/FreshStart2Nodes (756s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1226 14:26:49.268319    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:28:46.213768    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:29:36.186164    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:32:39.303819    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:33:46.275432    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:34:36.247777    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.82475287s)

-- stdout --
	* [multinode-053000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-053000 in cluster multinode-053000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1226 14:24:37.344737    6851 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:24:37.345004    6851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:24:37.345010    6851 out.go:309] Setting ErrFile to fd 2...
	I1226 14:24:37.345015    6851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:24:37.345201    6851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:24:37.346626    6851 out.go:303] Setting JSON to false
	I1226 14:24:37.368968    6851 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3247,"bootTime":1703626230,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 14:24:37.369088    6851 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 14:24:37.391253    6851 out.go:177] * [multinode-053000] minikube v1.32.0 on Darwin 14.2.1
	I1226 14:24:37.434759    6851 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 14:24:37.434830    6851 notify.go:220] Checking for updates...
	I1226 14:24:37.456908    6851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 14:24:37.477904    6851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 14:24:37.499680    6851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 14:24:37.520701    6851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 14:24:37.541695    6851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 14:24:37.563164    6851 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 14:24:37.618875    6851 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 14:24:37.619025    6851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 14:24:37.718038    6851 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-26 22:24:37.708084453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 14:24:37.739751    6851 out.go:177] * Using the docker driver based on user configuration
	I1226 14:24:37.760644    6851 start.go:298] selected driver: docker
	I1226 14:24:37.760680    6851 start.go:902] validating driver "docker" against <nil>
	I1226 14:24:37.760696    6851 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 14:24:37.765139    6851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 14:24:37.863554    6851 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2023-12-26 22:24:37.854589535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 14:24:37.863732    6851 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 14:24:37.863921    6851 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 14:24:37.885079    6851 out.go:177] * Using Docker Desktop driver with root privileges
	I1226 14:24:37.906861    6851 cni.go:84] Creating CNI manager for ""
	I1226 14:24:37.906892    6851 cni.go:136] 0 nodes found, recommending kindnet
	I1226 14:24:37.906916    6851 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 14:24:37.906942    6851 start_flags.go:323] config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 14:24:37.950708    6851 out.go:177] * Starting control plane node multinode-053000 in cluster multinode-053000
	I1226 14:24:37.973931    6851 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 14:24:37.995933    6851 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 14:24:38.037889    6851 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:24:38.037968    6851 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 14:24:38.037990    6851 cache.go:56] Caching tarball of preloaded images
	I1226 14:24:38.037988    6851 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 14:24:38.038221    6851 preload.go:174] Found /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 14:24:38.038243    6851 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 14:24:38.039825    6851 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/multinode-053000/config.json ...
	I1226 14:24:38.039927    6851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/multinode-053000/config.json: {Name:mk360dd7f83cb16adf37956fd979a7b15c41f288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 14:24:38.089337    6851 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 14:24:38.089357    6851 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 14:24:38.089385    6851 cache.go:194] Successfully downloaded all kic artifacts
	I1226 14:24:38.089426    6851 start.go:365] acquiring machines lock for multinode-053000: {Name:mk82cdb133de64b89b280b825892397413990144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 14:24:38.089576    6851 start.go:369] acquired machines lock for "multinode-053000" in 137.162µs
	I1226 14:24:38.089602    6851 start.go:93] Provisioning new machine with config: &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 14:24:38.089682    6851 start.go:125] createHost starting for "" (driver="docker")
	I1226 14:24:38.110831    6851 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 14:24:38.111202    6851 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1226 14:24:38.111262    6851 client.go:168] LocalClient.Create starting
	I1226 14:24:38.111397    6851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 14:24:38.111460    6851 main.go:141] libmachine: Decoding PEM data...
	I1226 14:24:38.111482    6851 main.go:141] libmachine: Parsing certificate...
	I1226 14:24:38.111550    6851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 14:24:38.111599    6851 main.go:141] libmachine: Decoding PEM data...
	I1226 14:24:38.111613    6851 main.go:141] libmachine: Parsing certificate...
	I1226 14:24:38.112221    6851 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 14:24:38.163679    6851 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 14:24:38.163764    6851 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1226 14:24:38.163783    6851 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1226 14:24:38.213162    6851 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1226 14:24:38.213185    6851 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1226 14:24:38.213205    6851 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1226 14:24:38.213337    6851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:24:38.264411    6851 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:24:38.264796    6851 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00231cfd0}
	I1226 14:24:38.264813    6851 network_create.go:124] attempt to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1226 14:24:38.264886    6851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	W1226 14:24:38.315185    6851 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000 returned with exit code 1
	W1226 14:24:38.315233    6851 network_create.go:149] failed to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1226 14:24:38.315252    6851 network_create.go:116] failed to create docker network multinode-053000 192.168.58.0/24, will retry: subnet is taken
	I1226 14:24:38.316665    6851 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:24:38.317071    6851 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022544e0}
	I1226 14:24:38.317083    6851 network_create.go:124] attempt to create docker network multinode-053000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1226 14:24:38.317152    6851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1226 14:24:38.402238    6851 network_create.go:108] docker network multinode-053000 192.168.67.0/24 created
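
The preceding lines show the subnet probe: 192.168.49.0/24 is already reserved, the daemon rejects 192.168.58.0/24 ("Pool overlaps with other one on this address space"), and 192.168.67.0/24 succeeds. A stripped-down sketch of that walk over candidate /24s (createNetwork is an illustrative helper, not minikube's API):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createNetwork tries candidate private /24s in order and returns the
	// first subnet the daemon accepts; overlapping pools are skipped.
	func createNetwork(name string, candidates []string) (string, error) {
		for _, subnet := range candidates {
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, name).CombinedOutput()
			if err != nil {
				// e.g. "Pool overlaps with other one on this address space"
				fmt.Printf("subnet %s taken: %s", subnet, out)
				continue
			}
			return subnet, nil
		}
		return "", fmt.Errorf("no free subnet for %s", name)
	}

	func main() {
		subnet, err := createNetwork("multinode-053000",
			[]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
		fmt.Println(subnet, err)
	}

Minikube additionally skips subnets it already knows are reserved (the network.go lines above) before ever calling the daemon, which just saves a guaranteed round-trip; the fallback behavior is the same.
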
	I1226 14:24:38.402279    6851 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-053000" container
	I1226 14:24:38.402386    6851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 14:24:38.453079    6851 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1226 14:24:38.504462    6851 oci.go:103] Successfully created a docker volume multinode-053000
	I1226 14:24:38.504584    6851 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 14:24:38.905018    6851 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1226 14:24:38.905068    6851 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:24:38.905081    6851 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 14:24:38.905169    6851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
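
This step repurposes the kicbase image as a throwaway tar runner to unpack the preload tarball into the named volume; note the next log entry is six minutes later, so this extraction (or the daemon behind it) is where the attempt stalls. Schematically, assuming plain os/exec suffices (extractPreload is a hypothetical helper, and the paths are abbreviated from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload runs a one-shot container whose entrypoint is tar,
	// mounting the tarball read-only and the target volume at /extractDir.
	func extractPreload(tarball, volume, image string) error {
		out, err := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(extractPreload(
			"preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
			"multinode-053000",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857"))
	}
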
	I1226 14:30:38.108796    6851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 14:30:38.108928    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:38.163870    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:38.163991    6851 retry.go:31] will retry after 283.512865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:38.448992    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:38.502402    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:38.502496    6851 retry.go:31] will retry after 346.168101ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:38.849223    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:38.901582    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:38.901695    6851 retry.go:31] will retry after 658.244839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:39.561325    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:39.614944    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:39.615061    6851 retry.go:31] will retry after 462.478325ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:40.079807    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:40.130176    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:30:40.130286    6851 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:30:40.130305    6851 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
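
Each probe above runs the same Go template, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, to resolve which host port Docker mapped to the guest's sshd; it keeps failing because the container was never created. The equivalent lookup in miniature (sshHostPort is an illustrative name, not minikube's function):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks the daemon for the host port bound to 22/tcp.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			// A missing container surfaces as exit status 1 with
			// "No such container" on stderr, exactly as in the log.
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("multinode-053000")
		fmt.Println(port, err)
	}
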
	I1226 14:30:40.130363    6851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 14:30:40.130438    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:40.179752    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:40.179845    6851 retry.go:31] will retry after 371.792327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:40.552198    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:40.605003    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:40.605091    6851 retry.go:31] will retry after 284.334924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:40.891649    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:40.942928    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:40.943022    6851 retry.go:31] will retry after 474.389793ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:41.419764    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:41.470635    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:30:41.470728    6851 retry.go:31] will retry after 467.708725ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:41.939388    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:30:41.994210    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:30:41.994304    6851 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:30:41.994324    6851 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:41.994338    6851 start.go:128] duration metric: createHost completed in 6m3.908954655s
	I1226 14:30:41.994345    6851 start.go:83] releasing machines lock for "multinode-053000", held for 6m3.909075518s
	W1226 14:30:41.994358    6851 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I1226 14:30:41.994775    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:42.045759    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:42.045814    6851 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	W1226 14:30:42.045890    6851 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1226 14:30:42.045903    6851 start.go:709] Will try again in 5 seconds ...
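
The retry.go entries follow a simple "attempt, log the error, sleep, attempt again" shape, and the same shape governs this outer StartHost retry. A compact sketch of the pattern (retryAfter is illustrative, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryAfter invokes fn up to attempts times, sleeping a fixed delay
	// between failures and returning the last error if none succeed.
	func retryAfter(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retryAfter(2, 5*time.Second, func() error {
			return errors.New("creating host: create host timed out")
		})
		fmt.Println("giving up:", err)
	}
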
	I1226 14:30:47.046865    6851 start.go:365] acquiring machines lock for multinode-053000: {Name:mk82cdb133de64b89b280b825892397413990144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 14:30:47.047106    6851 start.go:369] acquired machines lock for "multinode-053000" in 144.081µs
	I1226 14:30:47.047138    6851 start.go:96] Skipping create...Using existing machine configuration
	I1226 14:30:47.047152    6851 fix.go:54] fixHost starting: 
	I1226 14:30:47.047627    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:47.099973    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:47.100029    6851 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:47.100052    6851 fix.go:107] machineExists: false. err=machine does not exist
	I1226 14:30:47.120686    6851 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1226 14:30:47.164306    6851 delete.go:124] DEMOLISHING multinode-053000 ...
	I1226 14:30:47.164497    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:47.215718    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:30:47.215763    6851 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:47.215781    6851 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:47.216164    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:47.265919    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:47.265989    6851 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:47.266098    6851 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:30:47.315508    6851 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:30:47.315536    6851 kic.go:371] could not find the container multinode-053000 to remove it. will try anyways
	I1226 14:30:47.315608    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:47.368481    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:30:47.368522    6851 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:47.368613    6851 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1226 14:30:47.418992    6851 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 14:30:47.419035    6851 oci.go:650] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:48.419197    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:48.469064    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:48.469118    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:48.469130    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:30:48.469154    6851 retry.go:31] will retry after 369.016809ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:48.838604    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:48.890919    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:48.890971    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:48.890982    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:30:48.891006    6851 retry.go:31] will retry after 806.010338ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:49.697440    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:49.751134    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:49.751184    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:49.751195    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:30:49.751218    6851 retry.go:31] will retry after 604.216729ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:50.356230    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:50.408837    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:50.408883    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:50.408898    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:30:50.408923    6851 retry.go:31] will retry after 2.137352203s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:52.547224    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:52.600335    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:52.600384    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:52.600393    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:30:52.600417    6851 retry.go:31] will retry after 2.691835581s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:55.292518    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:55.346376    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:55.346419    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:55.346428    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:30:55.346454    6851 retry.go:31] will retry after 2.876664458s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:58.225432    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:30:58.281834    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:30:58.281881    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:30:58.281895    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:30:58.281918    6851 retry.go:31] will retry after 7.337959991s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:31:05.622142    6851 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:31:05.677068    6851 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:31:05.677113    6851 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:31:05.677130    6851 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:31:05.677160    6851 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1226 14:31:05.677236    6851 cli_runner.go:164] Run: docker rm -f -v multinode-053000
	I1226 14:31:05.727411    6851 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:31:05.777275    6851 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:31:05.777383    6851 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:31:05.827496    6851 cli_runner.go:164] Run: docker network rm multinode-053000
	I1226 14:31:05.931371    6851 fix.go:114] Sleeping 1 second for extra luck!
	I1226 14:31:06.932046    6851 start.go:125] createHost starting for "" (driver="docker")
	I1226 14:31:06.954140    6851 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 14:31:06.954323    6851 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1226 14:31:06.954363    6851 client.go:168] LocalClient.Create starting
	I1226 14:31:06.954566    6851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 14:31:06.954657    6851 main.go:141] libmachine: Decoding PEM data...
	I1226 14:31:06.954686    6851 main.go:141] libmachine: Parsing certificate...
	I1226 14:31:06.954762    6851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 14:31:06.954832    6851 main.go:141] libmachine: Decoding PEM data...
	I1226 14:31:06.954848    6851 main.go:141] libmachine: Parsing certificate...
	I1226 14:31:06.955544    6851 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 14:31:07.008640    6851 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 14:31:07.008725    6851 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1226 14:31:07.008745    6851 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1226 14:31:07.058047    6851 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1226 14:31:07.058079    6851 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1226 14:31:07.058092    6851 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1226 14:31:07.058239    6851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:31:07.109865    6851 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:31:07.111446    6851 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:31:07.112858    6851 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:31:07.113307    6851 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004f6d70}
	I1226 14:31:07.113319    6851 network_create.go:124] attempt to create docker network multinode-053000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1226 14:31:07.113386    6851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1226 14:31:07.199032    6851 network_create.go:108] docker network multinode-053000 192.168.76.0/24 created
	I1226 14:31:07.199070    6851 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-053000" container
	I1226 14:31:07.199175    6851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 14:31:07.252106    6851 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1226 14:31:07.301778    6851 oci.go:103] Successfully created a docker volume multinode-053000
	I1226 14:31:07.301923    6851 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 14:31:07.627728    6851 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1226 14:31:07.627755    6851 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:31:07.627783    6851 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 14:31:07.627897    6851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 14:37:07.018091    6851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 14:37:07.018217    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:07.071151    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:07.071268    6851 retry.go:31] will retry after 290.103713ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:07.361914    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:07.432755    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:07.432847    6851 retry.go:31] will retry after 281.134592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:07.714937    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:07.766855    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:07.766950    6851 retry.go:31] will retry after 313.070469ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:08.080775    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:08.133905    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:37:08.134010    6851 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:37:08.134029    6851 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:08.134082    6851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 14:37:08.134132    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:08.183576    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:08.183681    6851 retry.go:31] will retry after 342.525377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:08.527894    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:08.580714    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:08.580834    6851 retry.go:31] will retry after 258.66176ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:08.840661    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:08.894218    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:08.894320    6851 retry.go:31] will retry after 318.405765ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:09.213361    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:09.266069    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:09.266164    6851 retry.go:31] will retry after 737.745975ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:10.006299    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:10.060502    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:37:10.060619    6851 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:37:10.060639    6851 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:10.060648    6851 start.go:128] duration metric: createHost completed in 6m3.066101176s
	I1226 14:37:10.060715    6851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 14:37:10.060772    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:10.109890    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:10.109984    6851 retry.go:31] will retry after 288.664585ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:10.400674    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:10.454036    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:10.454126    6851 retry.go:31] will retry after 485.104899ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:10.941322    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:10.995072    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:10.995175    6851 retry.go:31] will retry after 527.276049ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:11.524127    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:11.574742    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:37:11.574855    6851 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:37:11.574875    6851 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:11.574933    6851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 14:37:11.574986    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:11.623933    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:11.624030    6851 retry.go:31] will retry after 200.25431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:11.826645    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:11.879493    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:11.879585    6851 retry.go:31] will retry after 308.299488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:12.188614    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:12.242900    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:37:12.242992    6851 retry.go:31] will retry after 702.167941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:12.945915    6851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:37:12.997368    6851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:37:12.997465    6851 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:37:12.997489    6851 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:37:12.997500    6851 fix.go:56] fixHost completed within 6m25.88812291s
	I1226 14:37:12.997506    6851 start.go:83] releasing machines lock for "multinode-053000", held for 6m25.888160437s
	W1226 14:37:12.997588    6851 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1226 14:37:13.040849    6851 out.go:177] 
	W1226 14:37:13.063050    6851 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1226 14:37:13.063098    6851 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1226 14:37:13.063233    6851 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1226 14:37:13.106747    6851 out.go:177] 
** /stderr **
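The decisive detail in the transcript above is the six-minute silence between the preload extraction starting at 14:31:07 and the next log line at 14:37:07: createHost reports completing in 6m3.066s, just past the 360-second (exactly six-minute) limit that produces DRV_CREATE_TIMEOUT. The container was never created — only the network and volume were — which is why every later docker inspect answers "No such container". A minimal sketch of guarding a slow create step with that kind of deadline (context-based; the image and volume names are placeholders, and this is not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// 360 s is the create-host budget the log reports before DRV_CREATE_TIMEOUT.
	ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
	defer cancel()

	// Stand-in for the slow step: extracting a preload tarball into a volume.
	cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "preload-cache:/preloaded:ro",
		"-v", "machine-volume:/extractDir",
		"example/kicbase", "-I", "lz4", "-xf", "/preloaded/preloaded.tar.lz4", "-C", "/extractDir")

	start := time.Now()
	err := cmd.Run()
	if ctx.Err() == context.DeadlineExceeded {
		// The report's case: roughly 6m3s elapsed against a 6m0s budget.
		fmt.Printf("create host timed out in 360 seconds (elapsed %v)\n", time.Since(start))
		return
	}
	fmt.Println("extraction finished:", err)
}

With exec.CommandContext the docker run child is killed when the deadline fires, so the caller can report the timeout rather than hang on the extraction indefinitely.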
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
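The shutdown-verification loop near the top of the stderr transcript (the oci.go:662/664 lines plus the retry.go:31 "will retry after ..." lines) polls `docker container inspect --format={{.State.Status}}` with growing, jittered delays before concluding "couldn't shut down (might be okay)". A minimal sketch of that poll-with-backoff pattern (the helper names here are mine, not minikube's):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out to docker, as the cli_runner lines above do.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// waitExited retries with a jittered, roughly doubling backoff until the
// container reports "exited" or the deadline passes.
func waitExited(name string, deadline time.Duration) error {
	delay := 300 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			return nil
		}
		// Jitter the delay so concurrent runs don't poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: status %q, err %v\n", sleep, status, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %s exited", name)
}

func main() {
	if err := waitExited("multinode-053000", 18*time.Second); err != nil {
		fmt.Println(err) // the log's oci.go:88 "couldn't shut down (might be okay)" case
	}
}

The loop treats anything other than "exited" as retryable, which is exactly what the log shows for the empty status of a container that no longer exists.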
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]
-- /stdout --
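Note what the post-mortem inspect actually returned: not a container but the leftover bridge network (subnet 192.168.76.0/24, "Containers": {}) created at 14:31:07, confirming that startup died after `docker network create` and before any container existed. A hedged cleanup sketch that finds and removes such orphaned minikube-labeled networks (plain docker CLI driven from Go; assumes docker is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List networks carrying the label minikube sets at creation time
	// (see the "Labels" block in the inspect output above).
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range strings.Fields(string(out)) {
		// "docker network rm" refuses networks with attached endpoints,
		// so only genuinely empty (orphaned) networks get removed.
		if err := exec.Command("docker", "network", "rm", name).Run(); err != nil {
			fmt.Printf("skipping %s: still in use?\n", name)
			continue
		}
		fmt.Println("removed orphaned network", name)
	}
}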
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (107.346653ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1226 14:37:13.344244    7188 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (756.00s)
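The `status --format={{.Host}}` invocations above render Go's text/template over minikube's status struct, with machine state also encoded in the exit code (7 here, which helpers_test treats as possibly fine for a nonexistent host). A self-contained illustration of that template style; the Status struct below is a stand-in, not minikube's real type:

package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// The same template string passed via --format in the report.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Mirrors the post-mortem result above: the host no longer exists.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Nonexistent"}); err != nil {
		panic(err)
	}
}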

TestMultiNode/serial/DeployApp2Nodes (107.08s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (92.629313ms)
** stderr ** 
	error: cluster "multinode-053000" does not exist
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- rollout status deployment/busybox: exit status 1 (92.147899ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.730033ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.568672ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.337983ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.911753ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.690501ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.369466ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.172769ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.335629ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.390928ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.188649ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1226 14:38:46.272921    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.161236ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
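All eleven retries above fail identically: the profile's kubeconfig entry survives, but it names a cluster that was never started, so kubectl reports `no server found for cluster "multinode-053000"` before any JSONPath evaluation happens. For reference, the polling pattern the test uses, sketched against a healthy cluster (assumes kubectl on PATH with the current context set; the expected pod count follows the two-node deployment in this test):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs asks kubectl for all pod IPs using the same JSONPath
// expression as the test.
func podIPs() ([]string, error) {
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		ips, err := podIPs()
		if err == nil && len(ips) == 2 { // one busybox pod per node
			fmt.Println("pod IPs:", ips)
			return
		}
		fmt.Printf("attempt %d: %v (may be temporary)\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs")
}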
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (92.519415ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.io: exit status 1 (92.417058ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default: exit status 1 (92.800334ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (91.636034ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
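The three nslookup probes climb the DNS ladder: a public name (kubernetes.io), the in-cluster short name (kubernetes.default), and the fully qualified service name (kubernetes.default.svc.cluster.local). Here each one dies at the kubectl layer — note the doubled space in "exec  --": the pod-name argument is empty because no pods were ever listed. A sketch of the same ladder run against a named pod (the pod name is a placeholder):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pod := "busybox-0" // placeholder; the test fills this in from `get pods`
	for _, host := range []string{
		"kubernetes.io",                        // external resolution
		"kubernetes.default",                   // in-cluster short name
		"kubernetes.default.svc.cluster.local", // fully qualified service
	} {
		out, err := exec.Command("kubectl", "exec", pod, "--",
			"nslookup", host).CombinedOutput()
		if err != nil {
			fmt.Printf("pod %s could not resolve %s: %v\n", pod, host, err)
			continue
		}
		fmt.Printf("%s ok:\n%s\n", host, out)
	}
}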
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (107.656067ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1226 14:39:00.423678    7262 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (107.08s)

TestMultiNode/serial/PingHostFrom2Pods (0.25s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (91.47809ms)
** stderr ** 
	error: no server found for cluster "multinode-053000"
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (106.814947ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1226 14:39:00.676201    7271 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.25s)

TestMultiNode/serial/AddNode (0.36s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-053000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-053000 -v 3 --alsologtostderr: exit status 80 (198.268407ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1226 14:39:00.731712    7275 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:39:00.732011    7275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:00.732017    7275 out.go:309] Setting ErrFile to fd 2...
	I1226 14:39:00.732021    7275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:00.732210    7275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:39:00.732563    7275 mustload.go:65] Loading cluster: multinode-053000
	I1226 14:39:00.732841    7275 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:39:00.733220    7275 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:00.782983    7275 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:00.805086    7275 out.go:177] 
	W1226 14:39:00.826822    7275 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:39:00.826849    7275 out.go:239] * 
	* 
	W1226 14:39:00.830545    7275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 14:39:00.851818    7275 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-053000 -v 3 --alsologtostderr" : exit status 80
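Analysis: exit status 80 here maps to GUEST_STATUS. Before adding a node, minikube loads the profile and probes the existing control-plane container with `docker container inspect --format={{.State.Status}}`; the daemon answers "No such container: multinode-053000", so the state is unknown and the command aborts. A minimal Go sketch of that probe follows; it is not minikube's actual code, and the helper name probeState is hypothetical.

	// state_probe.go: a minimal sketch (not minikube's actual code) of the
	// container-state probe seen in the stderr above. The helper name
	// probeState is hypothetical.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func probeState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			// Mirrors the log: inspect exits 1 with "No such container: <name>",
			// so the caller can only report an unknown state.
			return "", fmt.Errorf("unknown state %q: %v: %s",
				name, err, strings.TrimSpace(string(out)))
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		if state, err := probeState("multinode-053000"); err != nil {
			fmt.Println("X Exiting due to GUEST_STATUS:", err)
		} else {
			fmt.Println("state:", state)
		}
	}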
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (106.80076ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:01.036375    7281 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.36s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-053000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-053000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (35.78856ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-053000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-053000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-053000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
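Analysis: two errors stack here. kubectl fails because the kubeconfig context was deleted along with the cluster, so its stdout is empty; the test then tries to JSON-decode that empty output, which is what produces "unexpected end of JSON input". A minimal sketch, assuming (as the test message implies) that the jsonpath output is parsed with encoding/json:

	// labels_decode.go: why the secondary error is "unexpected end of JSON
	// input". kubectl exited non-zero ("context was not found"), so its
	// stdout was empty; decoding empty input fails before any label check.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}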
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (107.96156ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:01.233933    7288 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:156: expected profile "multinode-053000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-1-581000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-053000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-053000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-053000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
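Analysis: the assertion unmarshals the `profile list --output json` payload and counts the Nodes array inside the profile's Config; after the failed recreate the profile only records its single control-plane node, so the expected count of 3 fails. A pared-down sketch of that check, with struct shapes assumed from the JSON quoted above (not minikube's real config types):

	// profile_nodes.go: a pared-down sketch of the ProfileList assertion.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profile struct {
		Name   string
		Config *struct {
			Nodes []struct{ Name string }
		}
	}

	type profileList struct {
		Valid []profile `json:"valid"`
	}

	func main() {
		// Trimmed from the output above: the profile's Config records one node.
		raw := []byte(`{"valid":[{"Name":"multinode-053000","Config":{"Nodes":[{"Name":""}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s), want 3\n", p.Name, len(p.Config.Nodes))
		}
	}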
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (129.628165ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:01.597068    7300 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --output json --alsologtostderr: exit status 7 (106.634201ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-053000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:39:01.652234    7304 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:39:01.652567    7304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:01.652573    7304 out.go:309] Setting ErrFile to fd 2...
	I1226 14:39:01.652577    7304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:01.652762    7304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:39:01.652971    7304 out.go:303] Setting JSON to true
	I1226 14:39:01.653015    7304 mustload.go:65] Loading cluster: multinode-053000
	I1226 14:39:01.653046    7304 notify.go:220] Checking for updates...
	I1226 14:39:01.653304    7304 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:39:01.653315    7304 status.go:255] checking status of multinode-053000 ...
	I1226 14:39:01.653715    7304 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:01.703786    7304 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:01.703836    7304 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1226 14:39:01.703854    7304 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1226 14:39:01.703868    7304 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1226 14:39:01.703874    7304 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-053000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
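Analysis: for a single-node profile, `minikube status --output json` prints one JSON object (as in the stdout above), while the test decodes into a slice ([]cmd.Status); decoding an object into a slice is exactly the unmarshal error reported. A minimal reproduction, with a pared-down Status struct assumed from the fields in the log (not minikube's cmd.Status):

	// status_decode.go: reproduces the decode failure above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		raw := []byte(`{"Name":"multinode-053000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)

		var many []Status
		err := json.Unmarshal(raw, &many)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status

		// A tolerant caller could fall back to decoding a single object:
		var one Status
		if err := json.Unmarshal(raw, &one); err == nil {
			many = []Status{one}
		}
		fmt.Println(len(many), many[0].Host) // 1 Nonexistent
	}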
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (106.527297ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:01.864478    7310 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.27s)

                                                
                                    
TestMultiNode/serial/StopNode (0.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 node stop m03: exit status 85 (148.025144ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 node stop m03": exit status 85
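Analysis: exit status 85 (GUEST_NODE_RETRIEVE) is raised before any Docker call: `node stop m03` looks the node name up in the saved profile, and the recreated profile only lists the unnamed control-plane node. A minimal sketch of such a lookup (names and shapes are assumptions, not minikube's code):

	// node_lookup.go: a minimal sketch of the node-by-name lookup behind
	// GUEST_NODE_RETRIEVE.
	package main

	import (
		"errors"
		"fmt"
	)

	type node struct{ Name string }

	func retrieve(nodes []node, name string) (node, error) {
		for _, n := range nodes {
			if n.Name == name {
				return n, nil
			}
		}
		return node{}, errors.New("retrieving node: Could not find node " + name)
	}

	func main() {
		nodes := []node{{Name: ""}} // only the control-plane node survived
		if _, err := retrieve(nodes, "m03"); err != nil {
			fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
		}
	}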
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status: exit status 7 (106.703954ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:02.119874    7316 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1226 14:39:02.119886    7316 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr: exit status 7 (106.838669ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:39:02.175122    7320 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:39:02.175342    7320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:02.175348    7320 out.go:309] Setting ErrFile to fd 2...
	I1226 14:39:02.175352    7320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:02.175561    7320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:39:02.175742    7320 out.go:303] Setting JSON to false
	I1226 14:39:02.175767    7320 mustload.go:65] Loading cluster: multinode-053000
	I1226 14:39:02.175801    7320 notify.go:220] Checking for updates...
	I1226 14:39:02.176059    7320 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:39:02.176070    7320 status.go:255] checking status of multinode-053000 ...
	I1226 14:39:02.176518    7320 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:02.226799    7320 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:02.226866    7320 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1226 14:39:02.226884    7320 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1226 14:39:02.226903    7320 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1226 14:39:02.226910    7320 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:261: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:265: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (106.048266ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:02.387072    7326 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.52s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 node start m03 --alsologtostderr: exit status 85 (146.559304ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:39:02.498388    7332 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:39:02.498781    7332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:02.498787    7332 out.go:309] Setting ErrFile to fd 2...
	I1226 14:39:02.498792    7332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:02.498992    7332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:39:02.499349    7332 mustload.go:65] Loading cluster: multinode-053000
	I1226 14:39:02.499641    7332 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:39:02.521874    7332 out.go:177] 
	W1226 14:39:02.542753    7332 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1226 14:39:02.542777    7332 out.go:239] * 
	* 
	W1226 14:39:02.546493    7332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 14:39:02.567558    7332 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1226 14:39:02.498388    7332 out.go:296] Setting OutFile to fd 1 ...
I1226 14:39:02.498781    7332 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 14:39:02.498787    7332 out.go:309] Setting ErrFile to fd 2...
I1226 14:39:02.498792    7332 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 14:39:02.498992    7332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
I1226 14:39:02.499349    7332 mustload.go:65] Loading cluster: multinode-053000
I1226 14:39:02.499641    7332 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 14:39:02.521874    7332 out.go:177] 
W1226 14:39:02.542753    7332 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1226 14:39:02.542777    7332 out.go:239] * 
* 
W1226 14:39:02.546493    7332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1226 14:39:02.567558    7332 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status: exit status 7 (109.784576ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:02.699446    7334 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1226 14:39:02.699457    7334 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-053000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "a45a12338cbdceb3c6b1f43dfcc67b01617418d44142b6dff7b2e4f52844ece0",
	        "Created": "2023-12-26T22:31:07.160542463Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (106.155515ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:39:02.859985    7340 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (791.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-053000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-053000
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-053000: exit status 82 (13.206220127s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-053000" : exit status 82
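Analysis: the six identical "Stopping node" lines suggest a bounded retry: each attempt re-inspects the container, gets "No such container" back, and once the attempts are exhausted the command exits 82 (GUEST_STOP_TIMEOUT). A minimal sketch of that behavior; the attempt count and the error mapping are assumptions from the output above, not minikube's code:

	// stop_retry.go: a minimal sketch of the bounded retry suggested by the
	// repeated "Stopping node" lines.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const attempts = 6 // assumption, inferred from the six log lines
		var err error
		for i := 0; i < attempts; i++ {
			fmt.Println(`* Stopping node "multinode-053000"  ...`)
			// Every attempt re-checks the container and fails the same way,
			// because the container no longer exists.
			err = exec.Command("docker", "container", "inspect",
				"multinode-053000", "--format", "{{.State.Status}}").Run()
			if err == nil {
				return
			}
		}
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM:", err)
	}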
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr
E1226 14:39:36.245264    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:43:29.326375    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:43:46.271384    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:44:36.243374    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:48:46.269805    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:49:19.300020    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 14:49:36.241752    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m58.222921023s)

                                                
                                                
-- stdout --
	* [multinode-053000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-053000 in cluster multinode-053000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:39:16.176775    7360 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:39:16.176989    7360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:16.176994    7360 out.go:309] Setting ErrFile to fd 2...
	I1226 14:39:16.176998    7360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:39:16.177178    7360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:39:16.178535    7360 out.go:303] Setting JSON to false
	I1226 14:39:16.200801    7360 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4126,"bootTime":1703626230,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 14:39:16.200917    7360 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 14:39:16.223147    7360 out.go:177] * [multinode-053000] minikube v1.32.0 on Darwin 14.2.1
	I1226 14:39:16.265933    7360 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 14:39:16.266030    7360 notify.go:220] Checking for updates...
	I1226 14:39:16.287749    7360 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 14:39:16.308919    7360 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 14:39:16.330930    7360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 14:39:16.352759    7360 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 14:39:16.373972    7360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 14:39:16.396755    7360 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:39:16.396935    7360 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 14:39:16.454011    7360 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 14:39:16.454156    7360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 14:39:16.555628    7360 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-12-26 22:39:16.545849476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 14:39:16.599191    7360 out.go:177] * Using the docker driver based on existing profile
	I1226 14:39:16.621216    7360 start.go:298] selected driver: docker
	I1226 14:39:16.621247    7360 start.go:902] validating driver "docker" against &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 14:39:16.621372    7360 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 14:39:16.621586    7360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 14:39:16.724319    7360 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-12-26 22:39:16.715038754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 14:39:16.727414    7360 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 14:39:16.727488    7360 cni.go:84] Creating CNI manager for ""
	I1226 14:39:16.727497    7360 cni.go:136] 1 nodes found, recommending kindnet
	I1226 14:39:16.727506    7360 start_flags.go:323] config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
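	# Note: MultiNodeRequested:true with an empty CNI field in the profile above appears
	# to be why cni.go recommended kindnet a few lines up; the rest of this block is the
	# pinned cluster config (kicbase digest, memory, ports, mounts) echoed back verbatim
	# before the control plane start.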
	I1226 14:39:16.769879    7360 out.go:177] * Starting control plane node multinode-053000 in cluster multinode-053000
	I1226 14:39:16.791858    7360 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 14:39:16.834872    7360 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 14:39:16.856791    7360 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:39:16.856868    7360 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 14:39:16.856895    7360 cache.go:56] Caching tarball of preloaded images
	I1226 14:39:16.856893    7360 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 14:39:16.857125    7360 preload.go:174] Found /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 14:39:16.857144    7360 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 14:39:16.857737    7360 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/multinode-053000/config.json ...
	I1226 14:39:16.910544    7360 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 14:39:16.910565    7360 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 14:39:16.910584    7360 cache.go:194] Successfully downloaded all kic artifacts
	I1226 14:39:16.910642    7360 start.go:365] acquiring machines lock for multinode-053000: {Name:mk82cdb133de64b89b280b825892397413990144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 14:39:16.910737    7360 start.go:369] acquired machines lock for "multinode-053000" in 72.995µs
	I1226 14:39:16.910760    7360 start.go:96] Skipping create...Using existing machine configuration
	I1226 14:39:16.910768    7360 fix.go:54] fixHost starting: 
	I1226 14:39:16.911011    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:16.961832    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:16.961878    7360 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:16.961899    7360 fix.go:107] machineExists: false. err=machine does not exist
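	# Note: "docker container inspect --format={{.State.Status}}" exits non-zero with
	# "No such container" whenever the named container is absent, which is what drives
	# the machineExists: false conclusion here. A minimal check, assuming a standard
	# Docker CLI and no container by this name:
	#   docker container inspect multinode-053000 --format '{{.State.Status}}'
	#   echo $?   # prints 1 when the container does not exist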
	I1226 14:39:16.983566    7360 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1226 14:39:17.005526    7360 delete.go:124] DEMOLISHING multinode-053000 ...
	I1226 14:39:17.005717    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:17.056561    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:39:17.056615    7360 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:17.056639    7360 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:17.056992    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:17.107403    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:17.107453    7360 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:17.107531    7360 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:39:17.157304    7360 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:39:17.157335    7360 kic.go:371] could not find the container multinode-053000 to remove it. will try anyways
	I1226 14:39:17.157407    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:17.206598    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:39:17.206644    7360 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:17.206729    7360 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1226 14:39:17.256320    7360 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 14:39:17.256347    7360 oci.go:650] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:18.257371    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:18.312915    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:18.312977    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:18.312988    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:18.313025    7360 retry.go:31] will retry after 552.435622ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:18.867207    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:18.921827    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:18.921881    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:18.921892    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:18.921916    7360 retry.go:31] will retry after 630.563361ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:19.553609    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:19.609464    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:19.609506    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:19.609520    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:19.609541    7360 retry.go:31] will retry after 1.299683285s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:20.910991    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:20.964724    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:20.964758    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:20.964767    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:20.964788    7360 retry.go:31] will retry after 1.819553572s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:22.785270    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:22.838499    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:22.838541    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:22.838552    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:22.838586    7360 retry.go:31] will retry after 3.289951579s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:26.130893    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:26.205601    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:26.205641    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:26.205649    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:26.205669    7360 retry.go:31] will retry after 5.644417906s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:31.851536    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:31.906768    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:31.906811    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:31.906818    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:31.906839    7360 retry.go:31] will retry after 3.913346533s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:35.820697    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:39:35.872862    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:39:35.872905    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:39:35.872913    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:39:35.872939    7360 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1226 14:39:35.873009    7360 cli_runner.go:164] Run: docker rm -f -v multinode-053000
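	# Note: the shutdown-verification loop above polled container state with growing,
	# jittered delays (552ms, 630ms, 1.3s, 1.8s, 3.3s, 5.6s, 3.9s) before giving up,
	# which is harmless here: a container that was never created cannot be shut down,
	# and the "docker rm -f -v" just above is what clears any leftover state.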
	I1226 14:39:35.923171    7360 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:39:35.973073    7360 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:39:35.973182    7360 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:39:36.022867    7360 cli_runner.go:164] Run: docker network rm multinode-053000
	I1226 14:39:36.123358    7360 fix.go:114] Sleeping 1 second for extra luck!
	I1226 14:39:37.125520    7360 start.go:125] createHost starting for "" (driver="docker")
	I1226 14:39:37.147494    7360 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 14:39:37.147648    7360 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1226 14:39:37.147697    7360 client.go:168] LocalClient.Create starting
	I1226 14:39:37.147903    7360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 14:39:37.148001    7360 main.go:141] libmachine: Decoding PEM data...
	I1226 14:39:37.148032    7360 main.go:141] libmachine: Parsing certificate...
	I1226 14:39:37.148135    7360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 14:39:37.148216    7360 main.go:141] libmachine: Decoding PEM data...
	I1226 14:39:37.148236    7360 main.go:141] libmachine: Parsing certificate...
	I1226 14:39:37.148971    7360 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 14:39:37.201613    7360 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 14:39:37.201692    7360 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1226 14:39:37.201710    7360 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1226 14:39:37.254474    7360 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1226 14:39:37.254506    7360 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1226 14:39:37.254516    7360 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1226 14:39:37.254640    7360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:39:37.306442    7360 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:39:37.306849    7360 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002522720}
	I1226 14:39:37.306865    7360 network_create.go:124] attempt to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1226 14:39:37.306928    7360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	W1226 14:39:37.356643    7360 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000 returned with exit code 1
	W1226 14:39:37.356678    7360 network_create.go:149] failed to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1226 14:39:37.356693    7360 network_create.go:116] failed to create docker network multinode-053000 192.168.58.0/24, will retry: subnet is taken
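	# Note: "Pool overlaps with other one on this address space" means an existing
	# Docker network (or other local interface) already claims 192.168.58.0/24, so the
	# subnet is marked taken and the next candidate is tried. To see which network owns
	# a subnet (standard Docker CLI; network names are whatever exists locally):
	#   docker network ls --format '{{.Name}}'
	#   docker network inspect <name> --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'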
	I1226 14:39:37.358108    7360 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:39:37.358462    7360 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002462bb0}
	I1226 14:39:37.358479    7360 network_create.go:124] attempt to create docker network multinode-053000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1226 14:39:37.358543    7360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1226 14:39:37.443124    7360 network_create.go:108] docker network multinode-053000 192.168.67.0/24 created
	I1226 14:39:37.443160    7360 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-053000" container
	I1226 14:39:37.443276    7360 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 14:39:37.494022    7360 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1226 14:39:37.544047    7360 oci.go:103] Successfully created a docker volume multinode-053000
	I1226 14:39:37.544166    7360 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 14:39:37.850186    7360 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1226 14:39:37.850240    7360 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:39:37.850253    7360 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 14:39:37.850362    7360 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
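	# Note: this step untars the lz4 preload into the "multinode-053000" volume by
	# running tar inside the kicbase image, and nothing further is logged until
	# 14:45:37, roughly six minutes later, which lines up with the 360-second
	# createHost deadline reported below. Rerunning the extraction by hand has the
	# same shape (substitute the tarball path and image digest from the log line above):
	#   docker run --rm --entrypoint /usr/bin/tar \
	#     -v <preload>.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir \
	#     <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir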
	I1226 14:45:37.146949    7360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 14:45:37.147091    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:37.201011    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:37.201124    7360 retry.go:31] will retry after 217.505267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:37.419678    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:37.473429    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:37.473541    7360 retry.go:31] will retry after 450.521529ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:37.925369    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:37.978084    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:37.978184    7360 retry.go:31] will retry after 310.755877ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:38.289425    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:38.342979    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:38.343076    7360 retry.go:31] will retry after 444.848681ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:38.788459    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:38.843068    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:45:38.843170    7360 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:45:38.843186    7360 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
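	# Note: the df probes run over SSH, and the SSH endpoint is resolved by reading the
	# container's published 22/tcp host port; with no container at all, the port lookup
	# itself fails, so each probe collapses into the same "No such container" retry loop.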
	I1226 14:45:38.843254    7360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 14:45:38.843316    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:38.892370    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:38.892472    7360 retry.go:31] will retry after 352.332842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:39.245454    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:39.299573    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:39.299674    7360 retry.go:31] will retry after 297.77158ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:39.599590    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:39.653947    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:39.654063    7360 retry.go:31] will retry after 693.922936ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:40.349196    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:40.403568    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:45:40.403680    7360 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:45:40.403698    7360 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:40.403716    7360 start.go:128] duration metric: createHost completed in 6m3.280062659s
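	# Note: 6m3s is just over the 360.000000-second deadline that start.go enforces for
	# createHost, matching the "create host timed out" error reported below.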
	I1226 14:45:40.403778    7360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 14:45:40.403831    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:40.455267    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:40.455356    7360 retry.go:31] will retry after 277.029026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:40.734788    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:40.786533    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:40.786621    7360 retry.go:31] will retry after 274.897723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:41.063895    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:41.117323    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:41.117416    7360 retry.go:31] will retry after 343.589625ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:41.463285    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:41.516495    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:45:41.516602    7360 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:45:41.516618    7360 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:41.516670    7360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 14:45:41.516729    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:41.567035    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:41.567133    7360 retry.go:31] will retry after 298.336525ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:41.866265    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:41.919295    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:41.919391    7360 retry.go:31] will retry after 232.367556ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:42.152186    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:42.205453    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:45:42.205541    7360 retry.go:31] will retry after 821.686442ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:43.029049    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:45:43.080974    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:45:43.081075    7360 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:45:43.081092    7360 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:43.081103    7360 fix.go:56] fixHost completed within 6m26.172371191s
	I1226 14:45:43.081109    7360 start.go:83] releasing machines lock for "multinode-053000", held for 6m26.172399369s
	W1226 14:45:43.081123    7360 start.go:694] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W1226 14:45:43.081201    7360 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I1226 14:45:43.081207    7360 start.go:709] Will try again in 5 seconds ...
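	# Note: from here the whole fixHost -> "container is missing, will recreate" ->
	# demolish sequence repeats against a container that still does not exist; the
	# second attempt proceeds the same way as the first.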
	I1226 14:45:48.081348    7360 start.go:365] acquiring machines lock for multinode-053000: {Name:mk82cdb133de64b89b280b825892397413990144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 14:45:48.081465    7360 start.go:369] acquired machines lock for "multinode-053000" in 93.056µs
	I1226 14:45:48.081486    7360 start.go:96] Skipping create...Using existing machine configuration
	I1226 14:45:48.081491    7360 fix.go:54] fixHost starting: 
	I1226 14:45:48.081758    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:48.132508    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:48.132555    7360 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:48.132573    7360 fix.go:107] machineExists: false. err=machine does not exist
	I1226 14:45:48.154029    7360 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1226 14:45:48.195955    7360 delete.go:124] DEMOLISHING multinode-053000 ...
	I1226 14:45:48.196169    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:48.246926    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:45:48.246969    7360 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:48.246993    7360 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:48.247352    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:48.297076    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:48.297132    7360 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:48.297208    7360 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:45:48.347284    7360 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:45:48.347316    7360 kic.go:371] could not find the container multinode-053000 to remove it. will try anyways
	I1226 14:45:48.347389    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:48.396947    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:45:48.396990    7360 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:48.397071    7360 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1226 14:45:48.446856    7360 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 14:45:48.446882    7360 oci.go:650] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:49.449278    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:49.503992    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:49.504035    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:49.504044    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:45:49.504068    7360 retry.go:31] will retry after 710.912358ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:50.215886    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:50.267379    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:50.267425    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:50.267434    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:45:50.267459    7360 retry.go:31] will retry after 1.074304405s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:51.344107    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:51.397693    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:51.397737    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:51.397746    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:45:51.397771    7360 retry.go:31] will retry after 854.086411ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:52.254202    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:52.307882    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:52.307935    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:52.307945    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:45:52.307970    7360 retry.go:31] will retry after 1.095888305s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:53.404431    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:53.457290    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:53.457341    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:53.457350    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:45:53.457372    7360 retry.go:31] will retry after 2.16201195s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:55.619867    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:55.672145    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:55.672187    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:55.672200    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:45:55.672232    7360 retry.go:31] will retry after 2.628212738s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:58.300742    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:45:58.356033    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:45:58.356075    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:45:58.356084    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:45:58.356110    7360 retry.go:31] will retry after 7.411228414s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:46:05.767717    7360 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:46:05.821414    7360 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:46:05.821459    7360 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:46:05.821482    7360 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:46:05.821514    7360 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1226 14:46:05.821583    7360 cli_runner.go:164] Run: docker rm -f -v multinode-053000
	I1226 14:46:05.871783    7360 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:46:05.920968    7360 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:46:05.921070    7360 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:46:05.971645    7360 cli_runner.go:164] Run: docker network rm multinode-053000
	I1226 14:46:06.071750    7360 fix.go:114] Sleeping 1 second for extra luck!
	I1226 14:46:07.072318    7360 start.go:125] createHost starting for "" (driver="docker")
	I1226 14:46:07.095268    7360 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 14:46:07.095446    7360 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1226 14:46:07.095483    7360 client.go:168] LocalClient.Create starting
	I1226 14:46:07.095701    7360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 14:46:07.095793    7360 main.go:141] libmachine: Decoding PEM data...
	I1226 14:46:07.095820    7360 main.go:141] libmachine: Parsing certificate...
	I1226 14:46:07.095908    7360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 14:46:07.095982    7360 main.go:141] libmachine: Decoding PEM data...
	I1226 14:46:07.096000    7360 main.go:141] libmachine: Parsing certificate...
	I1226 14:46:07.117587    7360 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 14:46:07.170518    7360 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 14:46:07.170613    7360 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1226 14:46:07.170635    7360 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1226 14:46:07.221336    7360 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1226 14:46:07.221365    7360 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1226 14:46:07.221379    7360 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1226 14:46:07.221542    7360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:46:07.273238    7360 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:46:07.274829    7360 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:46:07.276404    7360 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:46:07.276759    7360 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d7b0}
	I1226 14:46:07.276771    7360 network_create.go:124] attempt to create docker network multinode-053000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1226 14:46:07.276853    7360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1226 14:46:07.361731    7360 network_create.go:108] docker network multinode-053000 192.168.76.0/24 created
	I1226 14:46:07.361772    7360 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-053000" container
	I1226 14:46:07.361880    7360 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 14:46:07.412614    7360 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1226 14:46:07.462817    7360 oci.go:103] Successfully created a docker volume multinode-053000
	I1226 14:46:07.462943    7360 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 14:46:07.773352    7360 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1226 14:46:07.773389    7360 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:46:07.773403    7360 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 14:46:07.773524    7360 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 14:52:07.094935    7360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 14:52:07.095072    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:07.149615    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:07.149709    7360 retry.go:31] will retry after 303.049682ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:07.453460    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:07.504788    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:07.504901    7360 retry.go:31] will retry after 541.364178ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:08.047065    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:08.103464    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:08.103562    7360 retry.go:31] will retry after 450.295402ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:08.556193    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:08.610857    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:52:08.610977    7360 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:52:08.610995    7360 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:08.611054    7360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 14:52:08.611108    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:08.661382    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:08.661490    7360 retry.go:31] will retry after 241.287798ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:08.905125    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:08.959154    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:08.959260    7360 retry.go:31] will retry after 419.941576ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:09.381540    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:09.437101    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:09.437199    7360 retry.go:31] will retry after 831.071716ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:10.269345    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:10.324786    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:52:10.324888    7360 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:52:10.324904    7360 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:10.324919    7360 start.go:128] duration metric: createHost completed in 6m3.254486996s
	I1226 14:52:10.324989    7360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 14:52:10.325041    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:10.374761    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:10.374854    7360 retry.go:31] will retry after 236.714736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:10.613995    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:10.666907    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:10.667014    7360 retry.go:31] will retry after 359.823616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:11.029244    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:11.081152    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:11.081241    7360 retry.go:31] will retry after 419.242805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:11.501726    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:11.554782    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:11.554873    7360 retry.go:31] will retry after 680.443634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:12.236276    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:12.290075    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:52:12.290175    7360 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:52:12.290194    7360 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:12.290249    7360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 14:52:12.290312    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:12.341334    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:12.341428    7360 retry.go:31] will retry after 264.410302ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:12.608162    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:12.660755    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:12.660850    7360 retry.go:31] will retry after 307.107355ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:12.968929    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:13.022381    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:13.022478    7360 retry.go:31] will retry after 479.856612ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:13.503614    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:13.556818    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1226 14:52:13.556911    7360 retry.go:31] will retry after 552.314392ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:14.109608    7360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1226 14:52:14.163341    7360 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1226 14:52:14.163448    7360 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1226 14:52:14.163464    7360 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:14.163475    7360 fix.go:56] fixHost completed within 6m26.084017346s
	I1226 14:52:14.163483    7360 start.go:83] releasing machines lock for "multinode-053000", held for 6m26.084042537s
	W1226 14:52:14.163556    7360 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1226 14:52:14.207491    7360 out.go:177] 
	W1226 14:52:14.229315    7360 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1226 14:52:14.229367    7360 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1226 14:52:14.229397    7360 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1226 14:52:14.251426    7360 out.go:177] 

                                                
                                                
** /stderr **
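
The retry.go lines in the log above show the shutdown check backing off (1.07s, 854ms, 1.09s, 2.16s, 2.62s, 7.41s) while docker container inspect --format={{.State.Status}} keeps failing with "No such container". A minimal standalone Go sketch of that probe-with-backoff pattern, assuming an illustrative starting delay and a 20-second budget (this is not minikube's actual retry.go/oci.go code):

	// shutdown_probe.go: standalone sketch; container name, delays, and budget are illustrative.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerStatus runs the same inspect command seen throughout the log above.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			// A missing container surfaces as exit status 1 plus
			// "Error response from daemon: No such container: ..." on stderr.
			return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	// waitForExited retries the probe with a growing delay, as retry.go does above.
	func waitForExited(name string, budget time.Duration) error {
		delay := time.Second
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			status, err := containerStatus(name)
			if err == nil && status == "exited" {
				return nil
			}
			fmt.Printf("will retry after %v: container %q not exited yet (status %q, err %v)\n",
				delay, name, status, err)
			time.Sleep(delay)
			delay += delay / 2 // grow the interval; the real backoff also adds jitter
		}
		return fmt.Errorf("couldn't shut down %s (might be okay)", name)
	}

	func main() {
		if err := waitForExited("multinode-053000", 20*time.Second); err != nil {
			fmt.Println(err)
		}
	}
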
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-053000" : exit status 52
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-053000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "41778915a67f3e052d6f6a2a5def54621f8aba018254c61659944bdd2329bb67",
	        "Created": "2023-12-26T22:46:07.323615258Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
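
This leftover network matches the docker network create call logged at 14:46:07: the network was created, the container never came up, and the post-mortem therefore finds a bridge network with an empty Containers map. For reference, a minimal Go sketch issuing the same call (arguments copied verbatim from the log line; the wrapper itself is illustrative, not minikube's network_create.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exactly the invocation from the 14:46:07 log line; only this wrapper is new.
		args := []string{"network", "create", "--driver=bridge",
			"--subnet=192.168.76.0/24", "--gateway=192.168.76.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=65535",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=multinode-053000",
			"multinode-053000"}
		out, err := exec.Command("docker", args...).CombinedOutput()
		fmt.Printf("%serr: %v\n", out, err)
	}
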
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (107.517488ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:52:14.577125    7727 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (791.72s)
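
Most of the failed probes in this test's log use the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} to read the host port mapped to the guest's 22/tcp. A minimal sketch of that lookup (the helper name is hypothetical); with the container gone it fails exactly like the "get port 22" retries above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort is a hypothetical helper around the template used in the log.
	func sshHostPort(name string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("multinode-053000")
		if err != nil {
			fmt.Println(err) // the "No such container" path seen in every retry above
			return
		}
		fmt.Println("ssh host port:", port)
	}
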

                                                
                                    
TestMultiNode/serial/DeleteNode (0.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 node delete m03: exit status 80 (199.258626ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr: exit status 7 (107.192179ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:52:14.831925    7735 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:52:14.832221    7735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:52:14.832227    7735 out.go:309] Setting ErrFile to fd 2...
	I1226 14:52:14.832232    7735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:52:14.832416    7735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:52:14.832600    7735 out.go:303] Setting JSON to false
	I1226 14:52:14.832628    7735 mustload.go:65] Loading cluster: multinode-053000
	I1226 14:52:14.832658    7735 notify.go:220] Checking for updates...
	I1226 14:52:14.832919    7735 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:52:14.832931    7735 status.go:255] checking status of multinode-053000 ...
	I1226 14:52:14.833318    7735 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:14.883957    7735 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:14.884002    7735 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1226 14:52:14.884020    7735 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1226 14:52:14.884034    7735 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1226 14:52:14.884039    7735 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "41778915a67f3e052d6f6a2a5def54621f8aba018254c61659944bdd2329bb67",
	        "Created": "2023-12-26T22:46:07.323615258Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (107.023793ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:52:15.086520    7741 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.51s)
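
The host: Nonexistent rows above come from the same probe: when docker container inspect --format={{.State.Status}} exits non-zero because the container is missing, the status code path falls back to "Nonexistent" instead of a real Docker state. A simplified sketch of that mapping (illustrative only; minikube's status.go distinguishes more cases and still surfaces the error, as the E1226 lines show):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState reduces the inspect result to states like those printed by "minikube status".
	func hostState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "Nonexistent" // inspect failed: the container no longer exists
		}
		switch strings.TrimSpace(string(out)) {
		case "running":
			return "Running"
		case "exited":
			return "Stopped"
		default:
			return "Unknown"
		}
	}

	func main() {
		fmt.Println(hostState("multinode-053000")) // "Nonexistent" after the failed recreate
	}
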

                                                
                                    
TestMultiNode/serial/StopMultiNode (14.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 stop: exit status 82 (14.401531817s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status: exit status 7 (107.750758ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:52:29.596168    7768 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1226 14:52:29.596182    7768 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr: exit status 7 (106.238069ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:52:29.651204    7772 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:52:29.651512    7772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:52:29.651519    7772 out.go:309] Setting ErrFile to fd 2...
	I1226 14:52:29.651523    7772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:52:29.651711    7772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:52:29.651897    7772 out.go:303] Setting JSON to false
	I1226 14:52:29.651920    7772 mustload.go:65] Loading cluster: multinode-053000
	I1226 14:52:29.651953    7772 notify.go:220] Checking for updates...
	I1226 14:52:29.652200    7772 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:52:29.652212    7772 status.go:255] checking status of multinode-053000 ...
	I1226 14:52:29.652631    7772 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:29.702375    7772 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:29.702439    7772 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1226 14:52:29.702457    7772 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1226 14:52:29.702474    7772 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1226 14:52:29.702482    7772 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "41778915a67f3e052d6f6a2a5def54621f8aba018254c61659944bdd2329bb67",
	        "Created": "2023-12-26T22:46:07.323615258Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (107.265217ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 14:52:29.864711    7778 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (14.78s)
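
Note how helpers_test.go:239 runs the status command, gets exit status 7, and records "may be ok": for a nonexistent host the command still prints a state on stdout while exiting non-zero. A sketch of tolerating that from a Go caller (binary path and profile name copied from the log; treating exit code 7 as tolerable is an assumption based only on the "may be ok" note above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "multinode-053000", "-n", "multinode-053000")
		out, err := cmd.Output() // stdout still carries "Nonexistent" on failure
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			fmt.Printf("status error: exit status 7 (may be ok); host state: %s", out)
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Printf("host state: %s", out)
	}
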

                                                
                                    
TestMultiNode/serial/RestartMultiNode (127.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1226 14:53:46.268270    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:54:36.242090    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m7.371827147s)

                                                
                                                
-- stdout --
	* [multinode-053000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-053000 in cluster multinode-053000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 14:52:30.028206    7784 out.go:296] Setting OutFile to fd 1 ...
	I1226 14:52:30.028410    7784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:52:30.028417    7784 out.go:309] Setting ErrFile to fd 2...
	I1226 14:52:30.028421    7784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 14:52:30.028607    7784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 14:52:30.029954    7784 out.go:303] Setting JSON to false
	I1226 14:52:30.052303    7784 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4920,"bootTime":1703626230,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 14:52:30.052420    7784 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 14:52:30.074520    7784 out.go:177] * [multinode-053000] minikube v1.32.0 on Darwin 14.2.1
	I1226 14:52:30.139163    7784 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 14:52:30.117221    7784 notify.go:220] Checking for updates...
	I1226 14:52:30.182082    7784 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 14:52:30.203025    7784 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 14:52:30.223970    7784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 14:52:30.246160    7784 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 14:52:30.267108    7784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 14:52:30.288840    7784 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 14:52:30.289653    7784 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 14:52:30.345883    7784 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 14:52:30.346035    7784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 14:52:30.446784    7784 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-12-26 22:52:30.436529453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 14:52:30.489185    7784 out.go:177] * Using the docker driver based on existing profile
	I1226 14:52:30.510223    7784 start.go:298] selected driver: docker
	I1226 14:52:30.510253    7784 start.go:902] validating driver "docker" against &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 14:52:30.510365    7784 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 14:52:30.510578    7784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 14:52:30.612271    7784 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-12-26 22:52:30.602801966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 14:52:30.615428    7784 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 14:52:30.615515    7784 cni.go:84] Creating CNI manager for ""
	I1226 14:52:30.615526    7784 cni.go:136] 1 nodes found, recommending kindnet
	I1226 14:52:30.615535    7784 start_flags.go:323] config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 14:52:30.657284    7784 out.go:177] * Starting control plane node multinode-053000 in cluster multinode-053000
	I1226 14:52:30.680417    7784 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 14:52:30.723325    7784 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 14:52:30.744325    7784 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:52:30.744412    7784 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 14:52:30.744435    7784 cache.go:56] Caching tarball of preloaded images
	I1226 14:52:30.744425    7784 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 14:52:30.744625    7784 preload.go:174] Found /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 14:52:30.744644    7784 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 14:52:30.744796    7784 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/multinode-053000/config.json ...
	I1226 14:52:30.795997    7784 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 14:52:30.796026    7784 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 14:52:30.796131    7784 cache.go:194] Successfully downloaded all kic artifacts
	I1226 14:52:30.796172    7784 start.go:365] acquiring machines lock for multinode-053000: {Name:mk82cdb133de64b89b280b825892397413990144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 14:52:30.796261    7784 start.go:369] acquired machines lock for "multinode-053000" in 67.916µs
	I1226 14:52:30.796285    7784 start.go:96] Skipping create...Using existing machine configuration
	I1226 14:52:30.796293    7784 fix.go:54] fixHost starting: 
	I1226 14:52:30.796519    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:30.846515    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:30.846563    7784 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:30.846585    7784 fix.go:107] machineExists: false. err=machine does not exist
	I1226 14:52:30.868465    7784 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1226 14:52:30.910825    7784 delete.go:124] DEMOLISHING multinode-053000 ...
	I1226 14:52:30.911017    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:30.962438    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:52:30.962491    7784 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:30.962508    7784 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:30.962852    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:31.012496    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:31.012560    7784 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:31.012643    7784 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:52:31.062062    7784 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:52:31.062097    7784 kic.go:371] could not find the container multinode-053000 to remove it. will try anyways
	I1226 14:52:31.062174    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:31.112265    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1226 14:52:31.112312    7784 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:31.112391    7784 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1226 14:52:31.161645    7784 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 14:52:31.161671    7784 oci.go:650] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:32.162871    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:32.216022    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:32.216069    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:32.216085    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:32.216122    7784 retry.go:31] will retry after 293.146937ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:32.511571    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:32.565135    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:32.565177    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:32.565188    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:32.565213    7784 retry.go:31] will retry after 456.064422ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:33.023541    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:33.075941    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:33.075985    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:33.075997    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:33.076028    7784 retry.go:31] will retry after 1.243754899s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:34.321077    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:34.372867    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:34.372910    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:34.372918    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:34.372943    7784 retry.go:31] will retry after 1.648088286s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:36.021798    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:36.075446    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:36.075487    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:36.075497    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:36.075520    7784 retry.go:31] will retry after 3.535786366s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:39.612072    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:39.664946    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:39.664993    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:39.665002    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:39.665022    7784 retry.go:31] will retry after 5.473300523s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:45.140686    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:45.191720    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:45.191767    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:45.191783    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:45.191804    7784 retry.go:31] will retry after 3.65112455s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:48.843407    7784 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1226 14:52:48.896790    7784 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1226 14:52:48.896840    7784 oci.go:662] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1226 14:52:48.896850    7784 oci.go:664] temporary error: container multinode-053000 status is  but expect it to be exited
	I1226 14:52:48.896878    7784 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1226 14:52:48.896949    7784 cli_runner.go:164] Run: docker rm -f -v multinode-053000
	I1226 14:52:48.948674    7784 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1226 14:52:48.998564    7784 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1226 14:52:48.998671    7784 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:52:49.049403    7784 cli_runner.go:164] Run: docker network rm multinode-053000
	I1226 14:52:49.158657    7784 fix.go:114] Sleeping 1 second for extra luck!
	I1226 14:52:50.160173    7784 start.go:125] createHost starting for "" (driver="docker")
	I1226 14:52:50.182233    7784 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 14:52:50.182407    7784 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1226 14:52:50.182459    7784 client.go:168] LocalClient.Create starting
	I1226 14:52:50.182664    7784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/ca.pem
	I1226 14:52:50.182765    7784 main.go:141] libmachine: Decoding PEM data...
	I1226 14:52:50.182799    7784 main.go:141] libmachine: Parsing certificate...
	I1226 14:52:50.182897    7784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17857-1142/.minikube/certs/cert.pem
	I1226 14:52:50.182975    7784 main.go:141] libmachine: Decoding PEM data...
	I1226 14:52:50.182994    7784 main.go:141] libmachine: Parsing certificate...
	I1226 14:52:50.204453    7784 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 14:52:50.256761    7784 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 14:52:50.256852    7784 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1226 14:52:50.256871    7784 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1226 14:52:50.306458    7784 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1226 14:52:50.306488    7784 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1226 14:52:50.306504    7784 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1226 14:52:50.306646    7784 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 14:52:50.358922    7784 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:52:50.359280    7784 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002405540}
	I1226 14:52:50.359295    7784 network_create.go:124] attempt to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1226 14:52:50.359385    7784 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	W1226 14:52:50.409867    7784 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000 returned with exit code 1
	W1226 14:52:50.409899    7784 network_create.go:149] failed to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1226 14:52:50.409926    7784 network_create.go:116] failed to create docker network multinode-053000 192.168.58.0/24, will retry: subnet is taken
	I1226 14:52:50.411305    7784 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1226 14:52:50.411693    7784 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025962f0}
	I1226 14:52:50.411705    7784 network_create.go:124] attempt to create docker network multinode-053000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1226 14:52:50.411769    7784 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1226 14:52:50.497208    7784 network_create.go:108] docker network multinode-053000 192.168.67.0/24 created
	I1226 14:52:50.497246    7784 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-053000" container
	I1226 14:52:50.497385    7784 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 14:52:50.548429    7784 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1226 14:52:50.597763    7784 oci.go:103] Successfully created a docker volume multinode-053000
	I1226 14:52:50.597879    7784 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 14:52:50.896612    7784 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1226 14:52:50.896648    7784 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 14:52:50.896661    7784 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 14:52:50.896752    7784 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir

** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "fe809d0b2663664fc07ef85ce523e97a4d76c118363edee5fc84b514a4775c2d",
	        "Created": "2023-12-26T22:52:50.45857877Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (108.654695ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1226 14:54:37.512781    7914 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (127.65s)
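
Note on the failure mode above: the "Pool overlaps with other one on this address space" error during network recreation means another Docker network already claimed 192.168.58.0/24, which is why minikube fell back to the next free private subnet (192.168.67.0/24). When triaging this locally, the subnet held by each network can be listed in one pass (a minimal sketch, assuming a local Docker daemon; output layout may vary by Docker version):

	docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'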

TestScheduledStopUnix (300.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-060000 --memory=2048 --driver=docker 
E1226 14:58:46.311274    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 14:59:36.284445    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:00:09.365826    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-060000 --memory=2048 --driver=docker : signal: killed (5m0.003233295s)

-- stdout --
	* [scheduled-stop-060000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-060000 in cluster scheduled-stop-060000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

-- stdout --
	* [scheduled-stop-060000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-060000 in cluster scheduled-stop-060000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-12-26 15:02:22.113604 -0800 PST m=+4706.412945913
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-060000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-060000:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-060000",
	        "Id": "478933cffa9b8d90e9b509608e6c61f5d785bbf8269c330e61cac97d8f7e4536",
	        "Created": "2023-12-26T22:57:23.235764982Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-060000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-060000 -n scheduled-stop-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-060000 -n scheduled-stop-060000: exit status 7 (113.602457ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1226 15:02:22.282097    8467 status.go:249] status error: host: state: unknown state "scheduled-stop-060000": docker container inspect scheduled-stop-060000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-060000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-060000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-060000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-060000
--- FAIL: TestScheduledStopUnix (300.93s)
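
Here `minikube start` was killed by the harness at the 5-minute mark while still at "Creating docker container", so the post-mortem `docker inspect` above matched only the leftover bridge network (note the empty "Containers" map). Networks created by minikube carry the labels shown in that output, so networks orphaned by killed runs can be found with (a sketch, assuming a local Docker daemon):

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true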

TestSkaffold (300.94s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3250486478 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-892000 --memory=2600 --driver=docker 
E1226 15:03:46.308688    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:04:36.281069    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
E1226 15:05:59.338765    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-892000 --memory=2600 --driver=docker : signal: killed (4m57.985228423s)

-- stdout --
	* [skaffold-892000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-892000 in cluster skaffold-892000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

-- stdout --
	* [skaffold-892000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-892000 in cluster skaffold-892000
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2023-12-26 15:07:23.044987 -0800 PST m=+5007.346985620
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-892000
helpers_test.go:235: (dbg) docker inspect skaffold-892000:

-- stdout --
	[
	    {
	        "Name": "skaffold-892000",
	        "Id": "1e8423ae338b985cd3213a6007b55d2146bb5eb1a55b5da8c84c56c1f1e49eba",
	        "Created": "2023-12-26T23:02:26.179695682Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-892000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-892000 -n skaffold-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-892000 -n skaffold-892000: exit status 7 (114.133487ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1226 15:07:23.215032    8649 status.go:249] status error: host: state: unknown state "skaffold-892000": docker container inspect skaffold-892000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-892000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-892000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-892000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-892000
--- FAIL: TestSkaffold (300.94s)
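
The `status --format={{.Host}}` probe used in these post-mortems is a Go template over minikube's status struct, so other components can be checked the same way (a sketch; the Kubelet and APIServer field names come from minikube's documented status template and may differ between versions):

	out/minikube-darwin-amd64 status -p skaffold-892000 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'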

TestInsufficientStorage (300.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-867000 --memory=2048 --output=json --wait=true --driver=docker 
E1226 15:08:46.306015    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 15:09:36.278521    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-867000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003226092s)

-- stdout --
	{"specversion":"1.0","id":"0e7cde9c-53af-44d0-b87c-5eda5fb7d599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-867000] minikube v1.32.0 on Darwin 14.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c36c5f6f-36f4-4e56-a8b6-682abe794033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17857"}}
	{"specversion":"1.0","id":"2bfd809e-2d7d-49b3-b39b-46cd07a77e86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig"}}
	{"specversion":"1.0","id":"07784eca-0a01-4090-aff0-16b72ecfa84c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a24ea30c-0224-4b55-9da7-b1c8bcfcf295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"519c7255-bd21-4774-a5d4-5f6e48f2c989","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube"}}
	{"specversion":"1.0","id":"9ed63201-5a53-451a-b05d-9f3df5378487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ad998d26-70fc-440e-8d44-2d5a7975599b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"56b3bb3f-5937-402c-9ffa-cbd6357ef52f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4e16ec6e-225a-4f80-9d00-e56d8b06932b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6be2847f-1164-4f58-8aa4-32e8064acdee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"f7854368-8aee-4122-874a-65171f7334e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-867000 in cluster insufficient-storage-867000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fba6d906-71be-4a36-9515-29d4c9b2d405","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7505fe4d-434e-408b-ae8d-e8f676240e11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-867000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-867000 --output=json --layout=cluster: context deadline exceeded (705ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-867000
--- FAIL: TestInsufficientStorage (300.76s)
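
TestInsufficientStorage runs `minikube start --output=json`, which emits the CloudEvents-style records shown above, one JSON object per line. Step progress can be pulled out of a saved copy of that stream with jq (a sketch; start.json is a hypothetical file holding the captured stdout, and jq is assumed to be available):

	jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + " " + .data.message' start.json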

Test pass (143/190)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 44.2
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.34
10 TestDownloadOnly/v1.28.4/json-events 7.63
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.3
17 TestDownloadOnly/v1.29.0-rc.2/json-events 12.94
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.3
23 TestDownloadOnly/DeleteAll 0.76
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
25 TestDownloadOnlyKic 1.94
26 TestBinaryMirror 1.65
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
32 TestAddons/Setup 219.52
36 TestAddons/parallel/InspektorGadget 11.81
37 TestAddons/parallel/MetricsServer 5.77
38 TestAddons/parallel/HelmTiller 9.88
40 TestAddons/parallel/CSI 50.99
41 TestAddons/parallel/Headlamp 13.45
42 TestAddons/parallel/CloudSpanner 5.67
43 TestAddons/parallel/LocalPath 55.45
44 TestAddons/parallel/NvidiaDevicePlugin 5.62
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.1
49 TestAddons/StoppedEnableDisable 11.73
57 TestHyperKitDriverInstallOrUpdate 7.53
60 TestErrorSpam/setup 21.44
61 TestErrorSpam/start 2.04
62 TestErrorSpam/status 1.17
63 TestErrorSpam/pause 1.69
64 TestErrorSpam/unpause 1.81
65 TestErrorSpam/stop 2.19
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 74.89
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 36.28
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 9.24
77 TestFunctional/serial/CacheCmd/cache/add_local 1.6
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
79 TestFunctional/serial/CacheCmd/cache/list 0.08
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
81 TestFunctional/serial/CacheCmd/cache/cache_reload 3.17
82 TestFunctional/serial/CacheCmd/cache/delete 0.16
83 TestFunctional/serial/MinikubeKubectlCmd 0.55
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.76
85 TestFunctional/serial/ExtraConfig 40.06
86 TestFunctional/serial/ComponentHealth 0.06
87 TestFunctional/serial/LogsCmd 3.07
88 TestFunctional/serial/LogsFileCmd 3.01
89 TestFunctional/serial/InvalidService 4.65
91 TestFunctional/parallel/ConfigCmd 0.51
92 TestFunctional/parallel/DashboardCmd 11.96
93 TestFunctional/parallel/DryRun 1.4
94 TestFunctional/parallel/InternationalLanguage 0.56
95 TestFunctional/parallel/StatusCmd 1.23
100 TestFunctional/parallel/AddonsCmd 0.26
101 TestFunctional/parallel/PersistentVolumeClaim 26.22
103 TestFunctional/parallel/SSHCmd 0.74
104 TestFunctional/parallel/CpCmd 2.73
105 TestFunctional/parallel/MySQL 34.77
106 TestFunctional/parallel/FileSync 0.4
107 TestFunctional/parallel/CertSync 2.58
111 TestFunctional/parallel/NodeLabels 0.08
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
115 TestFunctional/parallel/License 0.51
116 TestFunctional/parallel/Version/short 0.12
117 TestFunctional/parallel/Version/components 0.67
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
122 TestFunctional/parallel/ImageCommands/ImageBuild 4.35
123 TestFunctional/parallel/ImageCommands/Setup 4.54
124 TestFunctional/parallel/DockerEnv/bash 1.64
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.75
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.72
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.18
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.55
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.59
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.69
135 TestFunctional/parallel/ServiceCmd/DeployApp 17.19
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
141 TestFunctional/parallel/ServiceCmd/List 0.6
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
143 TestFunctional/parallel/ServiceCmd/HTTPS 15
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.04
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
150 TestFunctional/parallel/ServiceCmd/Format 15
151 TestFunctional/parallel/ServiceCmd/URL 15
152 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
153 TestFunctional/parallel/ProfileCmd/profile_list 0.47
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
157 TestFunctional/parallel/MountCmd/VerifyCleanup 2.44
158 TestFunctional/delete_addon-resizer_images 0.13
159 TestFunctional/delete_my-image_image 0.05
160 TestFunctional/delete_minikube_cached_images 0.05
164 TestImageBuild/serial/Setup 21.82
165 TestImageBuild/serial/NormalBuild 3.24
166 TestImageBuild/serial/BuildWithBuildArg 1.33
167 TestImageBuild/serial/BuildWithDockerIgnore 1.12
168 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.09
178 TestJSONOutput/start/Command 34.08
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.59
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.63
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 10.9
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.77
203 TestKicCustomNetwork/create_custom_network 22.93
204 TestKicCustomNetwork/use_default_bridge_network 22.66
205 TestKicExistingNetwork 23.7
206 TestKicCustomSubnet 24.37
207 TestKicStaticIP 23.33
208 TestMainNoArgs 0.08
209 TestMinikubeProfile 49.16
212 TestMountStart/serial/StartWithMountFirst 7.26
232 TestPreload 163.66
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.71
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.68
TestDownloadOnly/v1.16.0/json-events (44.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (44.199291505s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (44.20s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-571000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-571000: exit status 85 (334.392935ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-571000 | jenkins | v1.32.0 | 26 Dec 23 13:43 PST |          |
	|         | -p download-only-571000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 13:43:55
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 13:43:55.701368    1614 out.go:296] Setting OutFile to fd 1 ...
	I1226 13:43:55.701671    1614 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:43:55.701676    1614 out.go:309] Setting ErrFile to fd 2...
	I1226 13:43:55.701681    1614 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:43:55.701865    1614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	W1226 13:43:55.701961    1614 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17857-1142/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17857-1142/.minikube/config/config.json: no such file or directory
	I1226 13:43:55.703724    1614 out.go:303] Setting JSON to true
	I1226 13:43:55.728033    1614 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":805,"bootTime":1703626230,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 13:43:55.728136    1614 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 13:43:55.749930    1614 out.go:97] [download-only-571000] minikube v1.32.0 on Darwin 14.2.1
	I1226 13:43:55.771900    1614 out.go:169] MINIKUBE_LOCATION=17857
	W1226 13:43:55.750176    1614 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball: no such file or directory
	I1226 13:43:55.750210    1614 notify.go:220] Checking for updates...
	I1226 13:43:55.814672    1614 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 13:43:55.835764    1614 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 13:43:55.856584    1614 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 13:43:55.877772    1614 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	W1226 13:43:55.921716    1614 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 13:43:55.922210    1614 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 13:43:55.985897    1614 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 13:43:55.986038    1614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:43:56.093602    1614 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:50 SystemTime:2023-12-26 21:43:56.081967528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:43:56.115191    1614 out.go:97] Using the docker driver based on user configuration
	I1226 13:43:56.115241    1614 start.go:298] selected driver: docker
	I1226 13:43:56.115257    1614 start.go:902] validating driver "docker" against <nil>
	I1226 13:43:56.115462    1614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:43:56.215038    1614 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:50 SystemTime:2023-12-26 21:43:56.206163387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:43:56.215221    1614 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 13:43:56.219917    1614 start_flags.go:394] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I1226 13:43:56.220078    1614 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1226 13:43:56.241248    1614 out.go:169] Using Docker Desktop driver with root privileges
	I1226 13:43:56.263183    1614 cni.go:84] Creating CNI manager for ""
	I1226 13:43:56.263231    1614 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1226 13:43:56.263258    1614 start_flags.go:323] config:
	{Name:download-only-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:43:56.284961    1614 out.go:97] Starting control plane node download-only-571000 in cluster download-only-571000
	I1226 13:43:56.285002    1614 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 13:43:56.306918    1614 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 13:43:56.307016    1614 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1226 13:43:56.307065    1614 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 13:43:56.358775    1614 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 13:43:56.359022    1614 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 13:43:56.359158    1614 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 13:43:56.364401    1614 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1226 13:43:56.364417    1614 cache.go:56] Caching tarball of preloaded images
	I1226 13:43:56.364552    1614 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1226 13:43:56.386077    1614 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1226 13:43:56.386126    1614 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:43:56.465231    1614 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1226 13:44:03.505970    1614 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:44:03.506166    1614 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:44:04.053226    1614 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1226 13:44:04.053460    1614 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/download-only-571000/config.json ...
	I1226 13:44:04.053483    1614 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/download-only-571000/config.json: {Name:mk787cf525db2e43fb5e81ead3e204918b697174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 13:44:04.053785    1614 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1226 13:44:04.054110    1614 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I1226 13:44:16.453911    1614 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-571000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.34s)
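The download.go/preload.go lines in the log above show the integrity scheme: the preload tarball URL carries a ?checksum=md5:... parameter, and the client saves and then verifies that digest after downloading. A self-contained Go sketch of the verification step (illustrative only; the real logic is in the minikube download and preload code shown in the log):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 recomputes a file's MD5 and compares it with the hex digest
// taken from the ?checksum=md5:... query parameter in the download URL.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Digest copied from the v1.16.0 preload URL in the log above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244")
	fmt.Println(err)
}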

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (7.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (7.63330689s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (7.63s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-571000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-571000: exit status 85 (295.7647ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-571000 | jenkins | v1.32.0 | 26 Dec 23 13:43 PST |          |
	|         | -p download-only-571000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-571000 | jenkins | v1.32.0 | 26 Dec 23 13:44 PST |          |
	|         | -p download-only-571000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 13:44:40
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 13:44:40.240170    1669 out.go:296] Setting OutFile to fd 1 ...
	I1226 13:44:40.240469    1669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:44:40.240475    1669 out.go:309] Setting ErrFile to fd 2...
	I1226 13:44:40.240479    1669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:44:40.240672    1669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	W1226 13:44:40.240778    1669 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17857-1142/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17857-1142/.minikube/config/config.json: no such file or directory
	I1226 13:44:40.242059    1669 out.go:303] Setting JSON to true
	I1226 13:44:40.264738    1669 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":850,"bootTime":1703626230,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 13:44:40.264847    1669 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 13:44:40.286507    1669 out.go:97] [download-only-571000] minikube v1.32.0 on Darwin 14.2.1
	I1226 13:44:40.308199    1669 out.go:169] MINIKUBE_LOCATION=17857
	I1226 13:44:40.286636    1669 notify.go:220] Checking for updates...
	I1226 13:44:40.350160    1669 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 13:44:40.371303    1669 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 13:44:40.392087    1669 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 13:44:40.413239    1669 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	W1226 13:44:40.455006    1669 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 13:44:40.455413    1669 config.go:182] Loaded profile config "download-only-571000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1226 13:44:40.455460    1669 start.go:810] api.Load failed for download-only-571000: filestore "download-only-571000": Docker machine "download-only-571000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 13:44:40.455545    1669 driver.go:392] Setting default libvirt URI to qemu:///system
	W1226 13:44:40.455564    1669 start.go:810] api.Load failed for download-only-571000: filestore "download-only-571000": Docker machine "download-only-571000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 13:44:40.518365    1669 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 13:44:40.518482    1669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:44:40.620741    1669 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:false NGoroutines:52 SystemTime:2023-12-26 21:44:40.611154429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:44:40.642297    1669 out.go:97] Using the docker driver based on existing profile
	I1226 13:44:40.642317    1669 start.go:298] selected driver: docker
	I1226 13:44:40.642321    1669 start.go:902] validating driver "docker" against &{Name:download-only-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:44:40.642484    1669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:44:40.741527    1669 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:false NGoroutines:52 SystemTime:2023-12-26 21:44:40.732588292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:44:40.744639    1669 cni.go:84] Creating CNI manager for ""
	I1226 13:44:40.744667    1669 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 13:44:40.744679    1669 start_flags.go:323] config:
	{Name:download-only-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:44:40.766644    1669 out.go:97] Starting control plane node download-only-571000 in cluster download-only-571000
	I1226 13:44:40.766721    1669 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 13:44:40.787462    1669 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 13:44:40.787543    1669 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 13:44:40.787635    1669 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 13:44:40.839547    1669 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 13:44:40.839702    1669 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 13:44:40.839719    1669 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 13:44:40.839725    1669 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 13:44:40.839732    1669 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 13:44:40.845164    1669 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 13:44:40.845177    1669 cache.go:56] Caching tarball of preloaded images
	I1226 13:44:40.845342    1669 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 13:44:40.866681    1669 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1226 13:44:40.866708    1669 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:44:40.945304    1669 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-571000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.30s)
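Compared with the v1.16.0 run, this start finds the kicbase image already in the local cache directory and skips the pull (the image.go lines above), so only the v1.28.4 preload is fetched. A minimal Go sketch of that check-the-cache-first pattern (the helper and layout are hypothetical; the log shows minikube's real cache living under .minikube/cache):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cachedOrFetch returns the local path if the artifact is already
// cached, mirroring the "exists in cache, skipping pull" branch above;
// otherwise it runs the supplied fetch function.
func cachedOrFetch(cacheDir, name string, fetch func(dst string) error) (string, error) {
	dst := filepath.Join(cacheDir, name)
	if _, err := os.Stat(dst); err == nil {
		return dst, nil // cache hit: skip the download entirely
	}
	if err := fetch(dst); err != nil {
		return "", err
	}
	return dst, nil
}

func main() {
	p, err := cachedOrFetch(os.TempDir(), "kicbase.tar", func(dst string) error {
		return os.WriteFile(dst, []byte("stub download"), 0o644)
	})
	fmt.Println(p, err)
}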

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (12.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-571000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (12.942519382s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (12.94s)
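The three subtests download preloads for v1.16.0, v1.28.4, and v1.29.0-rc.2, and the two URLs visible in the logs above share one pattern. A small Go sketch deriving it (inferred from those two URLs only, so treat it as illustrative rather than a stable contract):

package main

import "fmt"

// preloadURL reproduces the tarball URL pattern observed above for the
// docker runtime on amd64. Inferred from the v1.16.0 and v1.28.4
// downloads; not an official, stable API.
func preloadURL(k8sVersion string) string {
	return fmt.Sprintf(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/%[1]s/preloaded-images-k8s-v18-%[1]s-docker-overlay2-amd64.tar.lz4",
		k8sVersion)
}

func main() {
	fmt.Println(preloadURL("v1.29.0-rc.2"))
}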

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-571000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-571000: exit status 85 (299.641638ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-571000 | jenkins | v1.32.0 | 26 Dec 23 13:43 PST |          |
	|         | -p download-only-571000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-571000 | jenkins | v1.32.0 | 26 Dec 23 13:44 PST |          |
	|         | -p download-only-571000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-571000 | jenkins | v1.32.0 | 26 Dec 23 13:44 PST |          |
	|         | -p download-only-571000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 13:44:48
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 13:44:48.170743    1706 out.go:296] Setting OutFile to fd 1 ...
	I1226 13:44:48.170966    1706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:44:48.170971    1706 out.go:309] Setting ErrFile to fd 2...
	I1226 13:44:48.170976    1706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:44:48.171155    1706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	W1226 13:44:48.171253    1706 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17857-1142/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17857-1142/.minikube/config/config.json: no such file or directory
	I1226 13:44:48.172660    1706 out.go:303] Setting JSON to true
	I1226 13:44:48.195472    1706 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":858,"bootTime":1703626230,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 13:44:48.195563    1706 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 13:44:48.216967    1706 out.go:97] [download-only-571000] minikube v1.32.0 on Darwin 14.2.1
	I1226 13:44:48.237632    1706 out.go:169] MINIKUBE_LOCATION=17857
	I1226 13:44:48.217104    1706 notify.go:220] Checking for updates...
	I1226 13:44:48.279813    1706 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 13:44:48.300646    1706 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 13:44:48.321887    1706 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 13:44:48.342888    1706 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	W1226 13:44:48.384899    1706 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 13:44:48.385674    1706 config.go:182] Loaded profile config "download-only-571000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1226 13:44:48.385762    1706 start.go:810] api.Load failed for download-only-571000: filestore "download-only-571000": Docker machine "download-only-571000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 13:44:48.385939    1706 driver.go:392] Setting default libvirt URI to qemu:///system
	W1226 13:44:48.385980    1706 start.go:810] api.Load failed for download-only-571000: filestore "download-only-571000": Docker machine "download-only-571000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 13:44:48.442740    1706 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 13:44:48.442877    1706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:44:48.546634    1706 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:false NGoroutines:51 SystemTime:2023-12-26 21:44:48.534901248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:44:48.568578    1706 out.go:97] Using the docker driver based on existing profile
	I1226 13:44:48.568611    1706 start.go:298] selected driver: docker
	I1226 13:44:48.568621    1706 start.go:902] validating driver "docker" against &{Name:download-only-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:44:48.568877    1706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:44:48.669082    1706 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:false NGoroutines:51 SystemTime:2023-12-26 21:44:48.66008436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:44:48.672188    1706 cni.go:84] Creating CNI manager for ""
	I1226 13:44:48.672210    1706 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 13:44:48.672224    1706 start_flags.go:323] config:
	{Name:download-only-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:44:48.693280    1706 out.go:97] Starting control plane node download-only-571000 in cluster download-only-571000
	I1226 13:44:48.693308    1706 cache.go:121] Beginning downloading kic base image for docker with docker
	I1226 13:44:48.714409    1706 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 13:44:48.714480    1706 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1226 13:44:48.714585    1706 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 13:44:48.764823    1706 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1226 13:44:48.764853    1706 cache.go:56] Caching tarball of preloaded images
	I1226 13:44:48.765076    1706 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1226 13:44:48.766131    1706 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 13:44:48.766228    1706 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 13:44:48.766243    1706 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 13:44:48.766248    1706 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 13:44:48.766259    1706 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 13:44:48.786440    1706 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1226 13:44:48.786462    1706 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:44:48.861712    1706 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:74b99cd9fa76659778caad266ad399ba -> /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1226 13:44:54.346926    1706 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:44:54.347141    1706 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1226 13:44:54.883883    1706 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1226 13:44:54.883963    1706 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/download-only-571000/config.json ...
	I1226 13:44:54.884621    1706 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1226 13:44:54.884941    1706 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17857-1142/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-571000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.30s)

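The download-only log above also shows how minikube guards the preload download: the `?checksum=md5:...` query parameter names the digest that download.go verifies after fetching, and preload.go then saves and re-verifies it. As a rough sketch, the same check can be reproduced by hand with the URL and digest taken from the log (the local filename here is arbitrary):

	# fetch the preload tarball named in the log above
	curl -Lo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4"
	# on macOS, md5 -q prints only the digest; compare it to the value from the log
	[ "$(md5 -q preload.tar.lz4)" = "74b99cd9fa76659778caad266ad399ba" ] && echo "checksum OK"
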
TestDownloadOnly/DeleteAll (0.76s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.76s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-571000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.94s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-068000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-068000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-068000
--- PASS: TestDownloadOnlyKic (1.94s)

TestBinaryMirror (1.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-873000 --alsologtostderr --binary-mirror http://127.0.0.1:49347 --driver=docker 
aaa_download_only_test.go:307: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-873000 --alsologtostderr --binary-mirror http://127.0.0.1:49347 --driver=docker : (1.031695858s)
helpers_test.go:175: Cleaning up "binary-mirror-873000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-873000
--- PASS: TestBinaryMirror (1.65s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-914000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-914000: exit status 85 (190.12021ms)

-- stdout --
	* Profile "addons-914000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-914000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

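Exit status 85 is the expected outcome here: the profile does not exist, and the output itself points at the recovery path. A minimal sketch of that path, assuming the same binary and profile name as the test:

	out/minikube-darwin-amd64 profile list                                 # confirm the profile is absent
	out/minikube-darwin-amd64 start -p addons-914000                       # create it
	out/minikube-darwin-amd64 addons enable dashboard -p addons-914000     # now the addon command can succeed
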
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-914000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-914000: exit status 85 (211.035604ms)

-- stdout --
	* Profile "addons-914000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-914000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (219.52s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-914000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-914000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m39.524560811s)
--- PASS: TestAddons/Setup (219.52s)

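The Setup run above enables thirteen addons in a single `start` invocation via repeated `--addons` flags. The same state can also be reached incrementally after the cluster is up; a sketch, using addon names taken from the flags above:

	out/minikube-darwin-amd64 start -p addons-914000 --memory=4000 --wait=true --driver=docker
	out/minikube-darwin-amd64 addons enable registry -p addons-914000
	out/minikube-darwin-amd64 addons enable metrics-server -p addons-914000
	out/minikube-darwin-amd64 addons list -p addons-914000     # verify what ended up enabled
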
TestAddons/parallel/InspektorGadget (11.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hlrjj" [165f0fa5-bcb3-4bf7-8501-6b00e975f846] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004114748s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-914000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-914000: (5.809150677s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

TestAddons/parallel/MetricsServer (5.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.418827ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-4hn5n" [32396de7-d512-4a80-979f-14b46b435dc9] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005002471s
addons_test.go:415: (dbg) Run:  kubectl --context addons-914000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-914000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

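The metrics-server check reduces to waiting for the pod and then confirming that `kubectl top` gets an answer. A hand-run equivalent (context name from the log; the deployment name is inferred from the pod name above, so treat it as an assumption):

	kubectl --context addons-914000 -n kube-system rollout status deployment/metrics-server
	kubectl --context addons-914000 top pods -n kube-system     # errors until metrics have been scraped
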
TestAddons/parallel/HelmTiller (9.88s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.237668ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-pcrjq" [1cb6c3c9-ed72-4799-b928-2defd3793562] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003514096s
addons_test.go:473: (dbg) Run:  kubectl --context addons-914000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-914000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.201314393s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-914000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.88s)

TestAddons/parallel/CSI (50.99s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 13.320839ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-914000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-914000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [28ee1bdc-121a-4c6e-bc7f-ba5e1dc494da] Pending
helpers_test.go:344: "task-pv-pod" [28ee1bdc-121a-4c6e-bc7f-ba5e1dc494da] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [28ee1bdc-121a-4c6e-bc7f-ba5e1dc494da] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.006750767s
addons_test.go:584: (dbg) Run:  kubectl --context addons-914000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-914000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-914000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-914000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-914000 delete pod task-pv-pod: (1.147921876s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-914000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-914000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-914000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e7cbeca3-bc74-458f-8bea-7b981daa3339] Pending
helpers_test.go:344: "task-pv-pod-restore" [e7cbeca3-bc74-458f-8bea-7b981daa3339] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e7cbeca3-bc74-458f-8bea-7b981daa3339] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003891907s
addons_test.go:626: (dbg) Run:  kubectl --context addons-914000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-914000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-914000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-914000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-914000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.686965526s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-914000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.99s)

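The repeated `get pvc hpvc -o jsonpath={.status.phase}` calls above are the test helper polling the claim's phase. The same poll can be written directly; a sketch mirroring helpers_test.go:394 (the claim may legitimately stay Pending until a pod consumes it, which is presumably why the test keeps polling and then creates the pod):

	kubectl --context addons-914000 create -f testdata/csi-hostpath-driver/pvc.yaml
	# print the claim's phase a dozen times, as the helper does
	for i in $(seq 1 12); do
	  kubectl --context addons-914000 get pvc hpvc -o 'jsonpath={.status.phase}'; echo
	  sleep 2
	done
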
TestAddons/parallel/Headlamp (13.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-914000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-914000 --alsologtostderr -v=1: (1.443419321s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-cx87l" [8cf16a50-03ae-4b7f-9659-18aefad065ef] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-cx87l" [8cf16a50-03ae-4b7f-9659-18aefad065ef] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006891378s
--- PASS: TestAddons/parallel/Headlamp (13.45s)

TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-n2ktp" [5115ab3a-ecc1-46cf-b5cf-4bdcc7f61334] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005640917s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-914000
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

TestAddons/parallel/LocalPath (55.45s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-914000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-914000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e742aac8-786e-4360-85f0-b91889fa2611] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e742aac8-786e-4360-85f0-b91889fa2611] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e742aac8-786e-4360-85f0-b91889fa2611] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005284595s
addons_test.go:891: (dbg) Run:  kubectl --context addons-914000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-914000 ssh "cat /opt/local-path-provisioner/pvc-6c59ca4a-c10d-40bf-89b3-952ff21618c2_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-914000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-914000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-914000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-914000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.501183543s)
--- PASS: TestAddons/parallel/LocalPath (55.45s)

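The local-path check above writes through a PVC and then reads the file back from the node with `minikube ssh`. That read generalizes; a sketch (the pvc-... directory name is allocated per claim, so it differs from run to run and is left as a placeholder here):

	# list what the provisioner created on the node, then read a file back
	out/minikube-darwin-amd64 -p addons-914000 ssh "ls /opt/local-path-provisioner/"
	out/minikube-darwin-amd64 -p addons-914000 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"
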
TestAddons/parallel/NvidiaDevicePlugin (5.62s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ppkd2" [9e00b4ea-e38a-43bd-a37b-f7c2e0c2a6ff] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.008424456s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-914000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-svk4h" [39dd2aa2-eef1-4fcb-a317-e5067ecac372] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004650223s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-914000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-914000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-914000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-914000: (11.014701984s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-914000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-914000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-914000
--- PASS: TestAddons/StoppedEnableDisable (11.73s)

TestHyperKitDriverInstallOrUpdate (7.53s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.53s)

TestErrorSpam/setup (21.44s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-046000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-046000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 --driver=docker : (21.440625285s)
--- PASS: TestErrorSpam/setup (21.44s)

TestErrorSpam/start (2.04s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 start --dry-run
--- PASS: TestErrorSpam/start (2.04s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (2.19s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 stop: (1.565938765s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-046000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-046000 stop
--- PASS: TestErrorSpam/stop (2.19s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17857-1142/.minikube/files/etc/test/nested/copy/1612/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-155000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-155000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m14.888902385s)
--- PASS: TestFunctional/serial/StartWithProxy (74.89s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.28s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-155000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-155000 --alsologtostderr -v=8: (36.283158535s)
functional_test.go:659: soft start took 36.283685829s for "functional-155000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.28s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-155000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 cache add registry.k8s.io/pause:3.1: (3.413226926s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 cache add registry.k8s.io/pause:3.3: (3.34382724s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 cache add registry.k8s.io/pause:latest: (2.483136046s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.24s)

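The three `cache add` calls above are the core of the caching workflow: each pulls an image into minikube's on-host cache and loads it into the cluster. A sketch of the surrounding subcommands, reusing an image tag from the log (the `cache list` and `cache delete` invocations without `-p` match how the later subtests call them):

	out/minikube-darwin-amd64 -p functional-155000 cache add registry.k8s.io/pause:3.1
	out/minikube-darwin-amd64 cache list                       # show cached entries
	out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
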
TestFunctional/serial/CacheCmd/cache/add_local (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2205529165/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cache add minikube-local-cache-test:functional-155000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 cache add minikube-local-cache-test:functional-155000: (1.050053008s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cache delete minikube-local-cache-test:functional-155000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-155000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.60s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (383.516853ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 cache reload: (1.999687865s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.17s)
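The sequence above verifies that `cache reload` restores an image evicted inside the node: remove it with docker, confirm crictl no longer finds it (the expected exit status 1 captured above), reload, and confirm it is back:

	out/minikube-darwin-amd64 -p functional-155000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-155000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails while evicted
	out/minikube-darwin-amd64 -p functional-155000 cache reload
	out/minikube-darwin-amd64 -p functional-155000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds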

TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.55s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 kubectl -- --context functional-155000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-155000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)

TestFunctional/serial/ExtraConfig (40.06s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-155000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1226 13:53:46.230006    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:46.237845    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:46.249914    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:46.270123    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:46.310693    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:46.392897    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:46.553347    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:46.873675    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:47.514082    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:48.794338    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:51.354485    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:53:56.474648    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
E1226 13:54:06.716811    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-155000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.056084868s)
functional_test.go:757: restart took 40.056219586s for "functional-155000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.06s)
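The restart above layers an apiserver flag onto the existing profile and waits for all components to come up; the interleaved cert_rotation errors reference a profile from an earlier test (addons-914000) and do not affect this result, which passes. The standalone form of the same knob:

	out/minikube-darwin-amd64 start -p functional-155000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all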

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-155000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.07s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 logs: (3.06759405s)
--- PASS: TestFunctional/serial/LogsCmd (3.07s)

TestFunctional/serial/LogsFileCmd (3.01s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd740598457/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd740598457/001/logs.txt: (3.012545898s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.01s)

TestFunctional/serial/InvalidService (4.65s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-155000 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-155000
E1226 13:54:27.197912    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-155000: exit status 115 (547.170791ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31772 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-155000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.65s)
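This negative test applies a Service with no running backing pod and expects `minikube service` to fail with SVC_UNREACHABLE (exit status 115), as captured above; the manifest path is relative to the test package:

	kubectl --context functional-155000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-155000    # exit status 115
	kubectl --context functional-155000 delete -f testdata/invalidsvc.yaml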

TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 config get cpus: exit status 14 (59.275662ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 config get cpus: exit status 14 (57.839321ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
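The round-trip above depends on `config get` exiting with status 14 when a key is unset; the same behavior is easy to check by hand:

	out/minikube-darwin-amd64 -p functional-155000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-155000 config get cpus     # prints 2
	out/minikube-darwin-amd64 -p functional-155000 config unset cpus
	out/minikube-darwin-amd64 -p functional-155000 config get cpus     # exit status 14, key not found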

TestFunctional/parallel/DashboardCmd (11.96s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-155000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-155000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4044: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.96s)

TestFunctional/parallel/DryRun (1.4s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-155000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-155000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (589.872718ms)

-- stdout --
	* [functional-155000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1226 13:56:00.302368    3948 out.go:296] Setting OutFile to fd 1 ...
	I1226 13:56:00.302668    3948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:56:00.302674    3948 out.go:309] Setting ErrFile to fd 2...
	I1226 13:56:00.302679    3948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:56:00.302867    3948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 13:56:00.304243    3948 out.go:303] Setting JSON to false
	I1226 13:56:00.327180    3948 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1530,"bootTime":1703626230,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 13:56:00.327298    3948 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 13:56:00.349421    3948 out.go:177] * [functional-155000] minikube v1.32.0 on Darwin 14.2.1
	I1226 13:56:00.371137    3948 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 13:56:00.371143    3948 notify.go:220] Checking for updates...
	I1226 13:56:00.413225    3948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 13:56:00.434076    3948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 13:56:00.455240    3948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 13:56:00.476165    3948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 13:56:00.497061    3948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 13:56:00.519073    3948 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 13:56:00.519888    3948 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 13:56:00.577258    3948 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 13:56:00.577400    3948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:56:00.677843    3948 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:66 SystemTime:2023-12-26 21:56:00.668416465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:56:00.699569    3948 out.go:177] * Using the docker driver based on existing profile
	I1226 13:56:00.720737    3948 start.go:298] selected driver: docker
	I1226 13:56:00.720764    3948 start.go:902] validating driver "docker" against &{Name:functional-155000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:56:00.720927    3948 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 13:56:00.746696    3948 out.go:177] 
	W1226 13:56:00.767538    3948 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1226 13:56:00.788681    3948 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-155000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.40s)
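The first invocation deliberately requests 250MB so that driver validation fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before anything is created; the second, without the memory override, passes validation:

	out/minikube-darwin-amd64 start -p functional-155000 --dry-run --memory 250MB --driver=docker   # exit status 23
	out/minikube-darwin-amd64 start -p functional-155000 --dry-run --driver=docker                  # validates cleanly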

TestFunctional/parallel/InternationalLanguage (0.56s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-155000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-155000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (557.717856ms)

-- stdout --
	* [functional-155000] minikube v1.32.0 sur Darwin 14.2.1
	  - MINIKUBE_LOCATION=17857
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1226 13:56:01.691209    4005 out.go:296] Setting OutFile to fd 1 ...
	I1226 13:56:01.691408    4005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:56:01.691413    4005 out.go:309] Setting ErrFile to fd 2...
	I1226 13:56:01.691417    4005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 13:56:01.691622    4005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
	I1226 13:56:01.693209    4005 out.go:303] Setting JSON to false
	I1226 13:56:01.715622    4005 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1531,"bootTime":1703626230,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1226 13:56:01.715708    4005 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 13:56:01.737462    4005 out.go:177] * [functional-155000] minikube v1.32.0 sur Darwin 14.2.1
	I1226 13:56:01.779476    4005 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 13:56:01.779555    4005 notify.go:220] Checking for updates...
	I1226 13:56:01.801500    4005 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
	I1226 13:56:01.822569    4005 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1226 13:56:01.844575    4005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 13:56:01.866414    4005 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube
	I1226 13:56:01.887403    4005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 13:56:01.909094    4005 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 13:56:01.909829    4005 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 13:56:01.965035    4005 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I1226 13:56:01.965176    4005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 13:56:02.067144    4005 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:66 SystemTime:2023-12-26 21:56:02.057745144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I1226 13:56:02.088683    4005 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1226 13:56:02.109759    4005 start.go:298] selected driver: docker
	I1226 13:56:02.109782    4005 start.go:902] validating driver "docker" against &{Name:functional-155000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 13:56:02.109888    4005 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 13:56:02.135885    4005 out.go:177] 
	W1226 13:56:02.157721    4005 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1226 13:56:02.179640    4005 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.56s)

TestFunctional/parallel/StatusCmd (1.23s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
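Status output is exercised three ways above: default, Go-template (-f), and JSON (-o json). Note the `kublet` label spelling is verbatim from the test's format string:

	out/minikube-darwin-amd64 -p functional-155000 status
	out/minikube-darwin-amd64 -p functional-155000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-darwin-amd64 -p functional-155000 status -o json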

TestFunctional/parallel/AddonsCmd (0.26s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (26.22s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e41079be-48f3-4b87-90a8-4a0830bb367d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005282968s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-155000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-155000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-155000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-155000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a51182ce-422b-4b1f-88ae-121d65987d32] Pending
helpers_test.go:344: "sp-pod" [a51182ce-422b-4b1f-88ae-121d65987d32] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a51182ce-422b-4b1f-88ae-121d65987d32] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004593676s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-155000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-155000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-155000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [85b81a4a-ddcb-4587-af50-f27ed3d245cc] Pending
helpers_test.go:344: "sp-pod" [85b81a4a-ddcb-4587-af50-f27ed3d245cc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [85b81a4a-ddcb-4587-af50-f27ed3d245cc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005626785s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-155000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.22s)
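The persistence check above writes through the claim, deletes the pod, recreates it, and reads the file back; the manifests live under the test package's testdata:

	kubectl --context functional-155000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-155000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-155000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-155000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-155000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-155000 exec sp-pod -- ls /tmp/mount   # foo survives the pod recreation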

TestFunctional/parallel/SSHCmd (0.74s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.73s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh -n functional-155000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cp functional-155000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2264492497/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh -n functional-155000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh -n functional-155000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.73s)
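minikube cp is exercised in three modes above: host-to-node, node-to-host, and host-to-node into a directory that does not yet exist. A sketch, with the local destination simplified from the test's temp dir:

	out/minikube-darwin-amd64 -p functional-155000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-amd64 -p functional-155000 cp functional-155000:/home/docker/cp-test.txt ./cp-test.txt
	out/minikube-darwin-amd64 -p functional-155000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt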

TestFunctional/parallel/MySQL (34.77s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-155000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-6l52t" [f99a786b-bb40-45c4-97c0-b1980e954ea9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-6l52t" [f99a786b-bb40-45c4-97c0-b1980e954ea9] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.046777934s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-155000 exec mysql-859648c796-6l52t -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-155000 exec mysql-859648c796-6l52t -- mysql -ppassword -e "show databases;": exit status 1 (404.605453ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-155000 exec mysql-859648c796-6l52t -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-155000 exec mysql-859648c796-6l52t -- mysql -ppassword -e "show databases;": exit status 1 (145.513552ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-155000 exec mysql-859648c796-6l52t -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-155000 exec mysql-859648c796-6l52t -- mysql -ppassword -e "show databases;": exit status 1 (124.317775ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-155000 exec mysql-859648c796-6l52t -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.77s)
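The "Access denied" and "Can't connect" failures above are expected while mysqld is still initializing inside the pod; the test simply retries this probe until it succeeds (the pod name is generated per run; this run's was mysql-859648c796-6l52t):

	kubectl --context functional-155000 exec <mysql-pod> -- mysql -ppassword -e "show databases;"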

TestFunctional/parallel/FileSync (0.4s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1612/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /etc/test/nested/copy/1612/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.58s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1612.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /etc/ssl/certs/1612.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1612.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /usr/share/ca-certificates/1612.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/16122.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /etc/ssl/certs/16122.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/16122.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /usr/share/ca-certificates/16122.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.58s)
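Each synced cert is checked at both canonical locations plus a hashed filename (the hash names vary per certificate); for the first cert above:

	out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /etc/ssl/certs/1612.pem"
	out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /usr/share/ca-certificates/1612.pem"
	out/minikube-darwin-amd64 -p functional-155000 ssh "sudo cat /etc/ssl/certs/51391683.0"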

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-155000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
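The label check prints every label key on the first node via a kubectl go-template; the same one-liner works standalone:

	kubectl --context functional-155000 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'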

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "sudo systemctl is-active crio": exit status 1 (502.353781ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
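With docker as the active runtime, crio must be inactive; `systemctl is-active` signals that with exit status 3, which the ssh wrapper surfaces as the non-zero exit above:

	out/minikube-darwin-amd64 -p functional-155000 ssh "sudo systemctl is-active crio"   # prints "inactive", exits non-zero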

TestFunctional/parallel/License (0.51s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.51s)

TestFunctional/parallel/Version/short (0.12s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.67s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-155000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-155000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-155000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-155000 image ls --format short --alsologtostderr:
I1226 13:56:15.852891    4166 out.go:296] Setting OutFile to fd 1 ...
I1226 13:56:15.853224    4166 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:15.853230    4166 out.go:309] Setting ErrFile to fd 2...
I1226 13:56:15.853236    4166 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:15.853416    4166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
I1226 13:56:15.854024    4166 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:15.854114    4166 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:15.854571    4166 cli_runner.go:164] Run: docker container inspect functional-155000 --format={{.State.Status}}
I1226 13:56:15.910016    4166 ssh_runner.go:195] Run: systemctl --version
I1226 13:56:15.910092    4166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155000
I1226 13:56:15.967008    4166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50012 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/functional-155000/id_rsa Username:docker}
I1226 13:56:16.052091    4166 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-155000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-155000 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-155000 | 4a4410bb70ee3 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/nginx                     | latest            | d453dd892d935 | 187MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/minikube-local-cache-test | functional-155000 | 1353a5f21262a | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-155000 image ls --format table --alsologtostderr:
I1226 13:56:21.079399    4231 out.go:296] Setting OutFile to fd 1 ...
I1226 13:56:21.079608    4231 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:21.079614    4231 out.go:309] Setting ErrFile to fd 2...
I1226 13:56:21.079618    4231 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:21.079790    4231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
I1226 13:56:21.080375    4231 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:21.080464    4231 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:21.080937    4231 cli_runner.go:164] Run: docker container inspect functional-155000 --format={{.State.Status}}
I1226 13:56:21.131142    4231 ssh_runner.go:195] Run: systemctl --version
I1226 13:56:21.131218    4231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155000
I1226 13:56:21.182022    4231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50012 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/functional-155000/id_rsa Username:docker}
I1226 13:56:21.268322    4231 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-155000 image ls --format json --alsologtostderr:
[{"id":"4a4410bb70ee3aa7f8198fb45a058dae4066480c032591458aa55baa9912958c","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-155000"],"size":"1240000"},{"id":"1353a5f21262afd6eeaf24509ac239db11b8a02404c96285e5a28ab206d65a47","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-155000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-155000"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests
":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.
9-0"],"size":"294000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-155000 image ls --format json --alsologtostderr:
I1226 13:56:20.793552    4225 out.go:296] Setting OutFile to fd 1 ...
I1226 13:56:20.793775    4225 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:20.793780    4225 out.go:309] Setting ErrFile to fd 2...
I1226 13:56:20.793784    4225 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:20.793969    4225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
I1226 13:56:20.794597    4225 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:20.794693    4225 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:20.795103    4225 cli_runner.go:164] Run: docker container inspect functional-155000 --format={{.State.Status}}
I1226 13:56:20.845703    4225 ssh_runner.go:195] Run: systemctl --version
I1226 13:56:20.845780    4225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155000
I1226 13:56:20.896537    4225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50012 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/functional-155000/id_rsa Username:docker}
I1226 13:56:20.979030    4225 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
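
A side note for anyone replaying this check outside the harness: the `image ls --format json` output above is a flat JSON array of objects with id, repoDigests, repoTags, and size fields. Below is a minimal Go sketch that shells out to the same binary and decodes that shape; the struct is inferred from this log rather than taken from minikube's own types, and the binary path and profile name are this run's.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// listedImage mirrors the objects visible in the JSON above.
	type listedImage struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-155000",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []listedImage
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			// %.12s truncates the 64-char id the way `docker images` does.
			fmt.Printf("%.12s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
		}
	}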

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-155000 image ls --format yaml --alsologtostderr:
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 1353a5f21262afd6eeaf24509ac239db11b8a02404c96285e5a28ab206d65a47
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-155000
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-155000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-155000 image ls --format yaml --alsologtostderr:
I1226 13:56:16.153777    4177 out.go:296] Setting OutFile to fd 1 ...
I1226 13:56:16.153996    4177 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:16.154000    4177 out.go:309] Setting ErrFile to fd 2...
I1226 13:56:16.154005    4177 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:16.154182    4177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
I1226 13:56:16.154836    4177 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:16.154929    4177 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:16.155318    4177 cli_runner.go:164] Run: docker container inspect functional-155000 --format={{.State.Status}}
I1226 13:56:16.207279    4177 ssh_runner.go:195] Run: systemctl --version
I1226 13:56:16.207353    4177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155000
I1226 13:56:16.257842    4177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50012 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/functional-155000/id_rsa Username:docker}
I1226 13:56:16.341742    4177 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh pgrep buildkitd: exit status 1 (435.096343ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image build -t localhost/my-image:functional-155000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 image build -t localhost/my-image:functional-155000 testdata/build --alsologtostderr: (3.629816245s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-155000 image build -t localhost/my-image:functional-155000 testdata/build --alsologtostderr:
I1226 13:56:16.879005    4193 out.go:296] Setting OutFile to fd 1 ...
I1226 13:56:16.879384    4193 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:16.879390    4193 out.go:309] Setting ErrFile to fd 2...
I1226 13:56:16.879394    4193 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 13:56:16.879572    4193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17857-1142/.minikube/bin
I1226 13:56:16.880182    4193 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:16.880789    4193 config.go:182] Loaded profile config "functional-155000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 13:56:16.881202    4193 cli_runner.go:164] Run: docker container inspect functional-155000 --format={{.State.Status}}
I1226 13:56:16.931463    4193 ssh_runner.go:195] Run: systemctl --version
I1226 13:56:16.931533    4193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155000
I1226 13:56:16.984023    4193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50012 SSHKeyPath:/Users/jenkins/minikube-integration/17857-1142/.minikube/machines/functional-155000/id_rsa Username:docker}
I1226 13:56:17.067604    4193 build_images.go:151] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3900966842.tar
I1226 13:56:17.067710    4193 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1226 13:56:17.076089    4193 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3900966842.tar
I1226 13:56:17.080113    4193 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3900966842.tar: stat -c "%s %y" /var/lib/minikube/build/build.3900966842.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3900966842.tar': No such file or directory
I1226 13:56:17.080140    4193 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3900966842.tar --> /var/lib/minikube/build/build.3900966842.tar (3072 bytes)
I1226 13:56:17.101028    4193 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3900966842
I1226 13:56:17.109360    4193 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3900966842 -xf /var/lib/minikube/build/build.3900966842.tar
I1226 13:56:17.118339    4193 docker.go:346] Building image: /var/lib/minikube/build/build.3900966842
I1226 13:56:17.118411    4193 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-155000 /var/lib/minikube/build/build.3900966842
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 2.4s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:4a4410bb70ee3aa7f8198fb45a058dae4066480c032591458aa55baa9912958c done
#8 naming to localhost/my-image:functional-155000 done
#8 DONE 0.0s
I1226 13:56:20.408312    4193 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-155000 /var/lib/minikube/build/build.3900966842: (3.289907447s)
I1226 13:56:20.408369    4193 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3900966842
I1226 13:56:20.418018    4193 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3900966842.tar
I1226 13:56:20.426289    4193 build_images.go:207] Built localhost/my-image:functional-155000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.3900966842.tar
I1226 13:56:20.426322    4193 build_images.go:123] succeeded building to: functional-155000
I1226 13:56:20.426327    4193 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)
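
The stderr trace above shows what `image build` does under the hood: pack the build context into a tar, copy it to /var/lib/minikube/build on the node, untar it, and run `docker build` there. The following is a hypothetical stand-alone reproduction in Go. The Dockerfile is inferred from the BuildKit steps in the log (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), so the real testdata/build contents may differ.

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Assemble a build context matching the steps seen in the log.
		dir, err := os.MkdirTemp("", "build")
		must(err)
		defer os.RemoveAll(dir)
		dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
		must(os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644))
		must(os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644))

		// Same CLI invocation as functional_test.go:314 above.
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-155000",
			"image", "build", "-t", "localhost/my-image:functional-155000", dir)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		must(cmd.Run())
	}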

TestFunctional/parallel/ImageCommands/Setup (4.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.45466241s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-155000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.54s)

TestFunctional/parallel/DockerEnv/bash (1.64s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-155000 docker-env) && out/minikube-darwin-amd64 status -p functional-155000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-155000 docker-env) && out/minikube-darwin-amd64 status -p functional-155000": (1.045539515s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-155000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.64s)
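
What this test exercises, reduced to its shell shape: `docker-env` prints DOCKER_HOST/DOCKER_CERT_PATH-style exports for the cluster's Docker daemon, and eval'ing them makes a plain `docker images` list the cluster-side images. A sketch wrapping the same bash one-liner the test runs (the binary path and profile name are this run's):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Mirrors functional_test.go:518 above: eval the env, then query the daemon.
		script := "eval $(out/minikube-darwin-amd64 -p functional-155000 docker-env) && docker images"
		cmd := exec.Command("/bin/bash", "-c", script)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}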

TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image load --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 image load --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr: (3.450144679s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image load --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 image load --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr: (2.40143406s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.72s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.550260863s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-155000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image load --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 image load --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr: (4.251138792s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image save gcr.io/google-containers/addon-resizer:functional-155000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 image save gcr.io/google-containers/addon-resizer:functional-155000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.545194322s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image rm gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.233699015s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-155000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 image save --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-155000 image save --daemon gcr.io/google-containers/addon-resizer:functional-155000 --alsologtostderr: (1.570146111s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-155000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.69s)

TestFunctional/parallel/ServiceCmd/DeployApp (17.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-155000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-155000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-gzvmk" [2783f82d-9b2f-410c-bd4f-2a2df39b3b12] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-gzvmk" [2783f82d-9b2f-410c-bd4f-2a2df39b3b12] Running
E1226 13:55:08.159905    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/addons-914000/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.005321994s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.19s)
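
The deploy-and-wait flow above, condensed into a hypothetical Go sketch using the same kubectl commands the test runs. The polling loop stands in for the framework's pod watcher; `kubectl create deployment` labels the pods app=hello-node, which is what both the test and this sketch select on.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func kubectl(args ...string) (string, error) {
		full := append([]string{"--context", "functional-155000"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		if _, err := kubectl("create", "deployment", "hello-node",
			"--image=registry.k8s.io/echoserver:1.8"); err != nil {
			panic(err)
		}
		if _, err := kubectl("expose", "deployment", "hello-node",
			"--type=NodePort", "--port=8080"); err != nil {
			panic(err)
		}
		for i := 0; i < 120; i++ { // ~10m at 5s intervals, matching the test's timeout
			phase, _ := kubectl("get", "pods", "-l", "app=hello-node",
				"-o", "jsonpath={.items[0].status.phase}")
			if phase == "Running" {
				fmt.Println("app=hello-node healthy")
				return
			}
			time.Sleep(5 * time.Second)
		}
		panic("timed out waiting for hello-node")
	}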

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-155000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-155000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-155000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-155000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3705: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-155000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-155000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c8b41b50-02eb-4a36-8f22-4d50ed02e16f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c8b41b50-02eb-4a36-8f22-4d50ed02e16f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006951613s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 service list -o json
functional_test.go:1493: Took "664.731655ms" to run "out/minikube-darwin-amd64 -p functional-155000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 service --namespace=default --https --url hello-node: signal: killed (15.0020931s)
-- stdout --
	https://127.0.0.1:50293
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:50293
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-155000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-155000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3735: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 service hello-node --url --format={{.IP}}: signal: killed (15.002812026s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 service hello-node --url: signal: killed (15.003869213s)
-- stdout --
	http://127.0.0.1:50337
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:50337
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "392.280464ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "78.002707ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "389.419593ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "77.794097ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2825048999/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2825048999/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2825048999/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T" /mount1: exit status 1 (489.596153ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-155000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2825048999/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2825048999/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2825048999/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.44s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-155000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-155000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-155000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (21.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-163000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-163000 --driver=docker : (21.820286108s)
--- PASS: TestImageBuild/serial/Setup (21.82s)

TestImageBuild/serial/NormalBuild (3.24s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-163000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-163000: (3.236644658s)
--- PASS: TestImageBuild/serial/NormalBuild (3.24s)

TestImageBuild/serial/BuildWithBuildArg (1.33s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-163000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-163000: (1.32867824s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.33s)

TestImageBuild/serial/BuildWithDockerIgnore (1.12s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-163000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-163000: (1.122762435s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.09s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-163000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-163000: (1.089308923s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.09s)

TestJSONOutput/start/Command (34.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-989000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1226 14:05:03.850741    1612 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17857-1142/.minikube/profiles/functional-155000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-989000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (34.076636955s)
--- PASS: TestJSONOutput/start/Command (34.08s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-989000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-989000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.9s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-989000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-989000 --output=json --user=testUser: (10.900330588s)
--- PASS: TestJSONOutput/stop/Command (10.90s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-658000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-658000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (383.964576ms)

-- stdout --
	{"specversion":"1.0","id":"048e7575-96cf-467e-8e66-5080c148c5fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-658000] minikube v1.32.0 on Darwin 14.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"855dcfd3-f58c-4dde-8be0-ffea6ade6b82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17857"}}
	{"specversion":"1.0","id":"f9d564e8-658c-418f-a094-c739e7d574a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig"}}
	{"specversion":"1.0","id":"f2861a0d-efd4-48a7-8d1d-c6fa9d7a9ff2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2829cd6b-8038-49a3-90ff-29a3c9371c96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6a25c0ff-8232-445d-ae0f-84e530d5854c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17857-1142/.minikube"}}
	{"specversion":"1.0","id":"b2f4861c-b5f6-4949-a414-a64e6022ef58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ff6f4bc-d9ee-4e2c-823d-eed2fab4102c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-658000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-658000
--- PASS: TestErrorJSONOutput (0.77s)
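
Note: the CloudEvents-style lines in the stdout above are newline-delimited JSON, so the output of minikube's --output=json mode can be consumed by a machine. A minimal Go sketch that decodes them, assuming only the fields visible in this log (the event struct and its handling are illustrative, not part of minikube):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the log above; minikube may emit more.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe in the stdout of: minikube start -p <profile> --output=json ...
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // ignore anything that is not a JSON event
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		if strings.HasSuffix(ev.Type, ".error") {
			// The error event above carries exitcode and message in data.
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}
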
TestKicCustomNetwork/create_custom_network (22.93s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-117000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-117000 --network=: (20.462202716s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-117000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-117000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-117000: (2.409876979s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.93s)

TestKicCustomNetwork/use_default_bridge_network (22.66s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-733000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-733000 --network=bridge: (20.374832381s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-733000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-733000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-733000: (2.232034721s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.66s)

TestKicExistingNetwork (23.7s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-118000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-118000 --network=existing-network: (21.109404059s)
helpers_test.go:175: Cleaning up "existing-network-118000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-118000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-118000: (2.254145643s)
--- PASS: TestKicExistingNetwork (23.70s)

TestKicCustomSubnet (24.37s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-812000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-812000 --subnet=192.168.60.0/24: (21.934996931s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-812000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-812000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-812000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-812000: (2.377044081s)
--- PASS: TestKicCustomSubnet (24.37s)
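
Note: the inspect command above pulls the first IPAM subnet out of the created network with a Go template. A small sketch of the same check driven from Go, under the assumption that the custom-subnet-812000 profile from this run is still up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template string the test passes to docker network inspect above.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-812000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// --subnet=192.168.60.0/24 was requested, so that is what should come back.
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Println("unexpected subnet:", got)
	} else {
		fmt.Println("subnet matches:", got)
	}
}
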
TestKicStaticIP (23.33s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-629000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-629000 --static-ip=192.168.200.200: (20.726790702s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-629000 ip
helpers_test.go:175: Cleaning up "static-ip-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-629000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-629000: (2.369563261s)
--- PASS: TestKicStaticIP (23.33s)
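
Note: as with the subnet check, the static IP can be verified by comparing `minikube ip` output against the requested address; a hedged sketch assuming the static-ip-629000 profile is still running:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "static-ip-629000", "ip").Output()
	if err != nil {
		fmt.Println("ip failed:", err)
		return
	}
	// --static-ip=192.168.200.200 was requested above.
	if got := strings.TrimSpace(string(out)); got != "192.168.200.200" {
		fmt.Println("static IP not honored:", got)
	} else {
		fmt.Println("static IP honored:", got)
	}
}
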
TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (49.16s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-839000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-839000 --driver=docker : (21.156829626s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-841000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-841000 --driver=docker : (21.548093349s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-839000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-841000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-841000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-841000: (2.412132152s)
helpers_test.go:175: Cleaning up "first-839000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-839000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-839000: (2.432847711s)
--- PASS: TestMinikubeProfile (49.16s)
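
Note: `profile list -ojson` is the machine-readable variant of the profile listing exercised above. Its exact schema is not shown in this log, so the sketch below decodes it into a generic map rather than a typed struct:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Schema not shown in the log above, so stay generic.
	var profiles map[string]interface{}
	if err := json.Unmarshal(out, &profiles); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for key, val := range profiles {
		fmt.Printf("%s: %v\n", key, val)
	}
}
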
TestMountStart/serial/StartWithMountFirst (7.26s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-581000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-581000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.256348967s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.26s)

TestPreload (163.66s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-761000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m37.776488273s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-761000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-761000 image pull gcr.io/k8s-minikube/busybox: (4.646852306s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-761000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-761000: (10.864676612s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-761000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-761000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (47.543318433s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-761000 image list
helpers_test.go:175: Cleaning up "test-preload-761000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-761000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-761000: (2.521787159s)
--- PASS: TestPreload (163.66s)
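
Note: the block above is the whole preload round-trip: start a cluster with --preload=false, pull an extra image, stop, restart, and confirm the image survived. A hedged sketch of the same sequence driven through the CLI, with the flags taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one CLI step and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	return string(out), err
}

func main() {
	p := "test-preload-761000"
	steps := [][]string{
		{"minikube", "start", "-p", p, "--memory=2200", "--preload=false",
			"--driver=docker", "--kubernetes-version=v1.24.4"},
		{"minikube", "-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"minikube", "stop", "-p", p},
		{"minikube", "start", "-p", p, "--memory=2200", "--driver=docker"},
	}
	for _, step := range steps {
		if out, err := run(step...); err != nil {
			fmt.Println(out)
			return
		}
	}
	// The image list should still contain busybox after the restart.
	out, _ := run("minikube", "-p", p, "image", "list")
	fmt.Println("busybox survived restart:", strings.Contains(out, "busybox"))
}
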
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.71s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17857
- KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2648035129/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2648035129/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2648035129/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2648035129/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.71s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.68s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17857
- KUBECONFIG=/Users/jenkins/minikube-integration/17857-1142/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2524400964/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2524400964/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2524400964/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2524400964/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.68s)

Test skip (21/190)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (16.71s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.280458ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-pvgpl" [87819714-1c58-49c0-957d-17d7592d679f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004623622s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l5566" [abcaa8fd-468e-4a3c-8afa-1758a42857d5] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005582192s
addons_test.go:340: (dbg) Run:  kubectl --context addons-914000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-914000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-914000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.635122019s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (16.71s)

TestAddons/parallel/Ingress (12.07s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-914000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-914000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-914000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [42702f5d-c2d6-4624-af92-88399f1333f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [42702f5d-c2d6-4624-af92-88399f1333f4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005239686s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-914000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.07s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-155000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-155000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-2gqg9" [85c7aac2-660a-43dd-8c00-615189e76aa3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-2gqg9" [85c7aac2-660a-43dd-8c00-615189e76aa3] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005904631s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctional/parallel/MountCmd/any-port (14.37s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2513903072/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1703627758832341000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2513903072/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1703627758832341000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2513903072/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1703627758832341000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2513903072/001/test-1703627758832341000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.703131ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.866652ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (556.316529ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (358.149694ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.494745ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.16304ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (424.147968ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "sudo umount -f /mount-9p": exit status 1 (386.124542ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-amd64 -p functional-155000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2513903072/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (14.37s)
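
Note: the seven findmnt attempts above are a poll loop waiting for the 9p mount to appear before giving up. A sketch of the same wait written against the minikube CLI (the retry count and sleep interval are illustrative; the real test uses its own retry helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount retries the same check the test runs via minikube ssh.
func waitForMount(profile, mountPoint string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		check := fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint)
		if err := exec.Command("minikube", "-p", profile, "ssh", check).Run(); err == nil {
			return true // a 9p mount is visible at the mount point
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if !waitForMount("functional-155000", "/mount-9p", 7) {
		// Expected on macOS until the OS prompt allowing the unsigned
		// binary to listen on a non-localhost port is accepted.
		fmt.Println("mount did not appear")
	}
}
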
TestFunctional/parallel/MountCmd/specific-port (12.88s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2540431251/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (383.523338ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
2023/12/26 13:56:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (462.749856ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (444.475666ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.863875ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.787636ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.193303ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.090122ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-155000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-155000 ssh "sudo umount -f /mount-9p": exit status 1 (345.776098ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-155000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-155000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2540431251/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.88s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)