Test Report: Docker_macOS 17323

c1ea47c43b7779cefdb242dbac2fab4b02ecdc60:2023-10-02:31265

Failed tests (25/181)

TestOffline (753.45s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-088000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-088000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m32.573031596s)
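For local triage, the failing invocation can be replayed against a fresh profile. This is only a sketch: it assumes a minikube source checkout with out/minikube-darwin-amd64 already built, and it reuses the profile name and flags copied verbatim from the failing run above.

    # reproduction sketch (assumes out/minikube-darwin-amd64 is already built; profile and flags copied from the log above)
    out/minikube-darwin-amd64 delete -p offline-docker-088000
    out/minikube-darwin-amd64 start -p offline-docker-088000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker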

-- stdout --
	* [offline-docker-088000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-088000 in cluster offline-docker-088000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-088000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1002 17:28:15.920764   55564 out.go:296] Setting OutFile to fd 1 ...
	I1002 17:28:15.921057   55564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:28:15.921062   55564 out.go:309] Setting ErrFile to fd 2...
	I1002 17:28:15.921066   55564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:28:15.921237   55564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 17:28:15.922731   55564 out.go:303] Setting JSON to false
	I1002 17:28:15.945553   55564 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25064,"bootTime":1696267831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 17:28:15.945653   55564 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 17:28:15.967399   55564 out.go:177] * [offline-docker-088000] minikube v1.31.2 on Darwin 14.0
	I1002 17:28:16.009513   55564 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 17:28:16.010096   55564 notify.go:220] Checking for updates...
	I1002 17:28:16.052437   55564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 17:28:16.094224   55564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 17:28:16.115463   55564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 17:28:16.136443   55564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 17:28:16.157270   55564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 17:28:16.178732   55564 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 17:28:16.237357   55564 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 17:28:16.237498   55564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:28:16.376923   55564 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:150 SystemTime:2023-10-03 00:28:16.364353295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker S
cout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:28:16.419727   55564 out.go:177] * Using the docker driver based on user configuration
	I1002 17:28:16.440658   55564 start.go:298] selected driver: docker
	I1002 17:28:16.440698   55564 start.go:902] validating driver "docker" against <nil>
	I1002 17:28:16.440753   55564 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 17:28:16.445141   55564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:28:16.547493   55564 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:150 SystemTime:2023-10-03 00:28:16.534880923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker S
cout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:28:16.547674   55564 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 17:28:16.547870   55564 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 17:28:16.569279   55564 out.go:177] * Using Docker Desktop driver with root privileges
	I1002 17:28:16.590413   55564 cni.go:84] Creating CNI manager for ""
	I1002 17:28:16.590453   55564 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 17:28:16.590469   55564 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 17:28:16.590499   55564 start_flags.go:321] config:
	{Name:offline-docker-088000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-088000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 17:28:16.633538   55564 out.go:177] * Starting control plane node offline-docker-088000 in cluster offline-docker-088000
	I1002 17:28:16.677711   55564 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 17:28:16.699518   55564 out.go:177] * Pulling base image ...
	I1002 17:28:16.741492   55564 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:28:16.741538   55564 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 17:28:16.741550   55564 cache.go:57] Caching tarball of preloaded images
	I1002 17:28:16.741553   55564 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 17:28:16.741659   55564 preload.go:174] Found /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 17:28:16.741669   55564 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 17:28:16.742704   55564 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/offline-docker-088000/config.json ...
	I1002 17:28:16.743027   55564 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/offline-docker-088000/config.json: {Name:mkbbae242965b3ee9dcbdc487330716aba97e3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 17:28:16.866134   55564 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 17:28:16.866162   55564 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 17:28:16.866201   55564 cache.go:195] Successfully downloaded all kic artifacts
	I1002 17:28:16.866269   55564 start.go:365] acquiring machines lock for offline-docker-088000: {Name:mkd5b34407122606bd7650d604f8e740eaafe015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:28:16.866467   55564 start.go:369] acquired machines lock for "offline-docker-088000" in 184.408µs
	I1002 17:28:16.866504   55564 start.go:93] Provisioning new machine with config: &{Name:offline-docker-088000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-088000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 17:28:16.866600   55564 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:28:16.888568   55564 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:28:16.888762   55564 start.go:159] libmachine.API.Create for "offline-docker-088000" (driver="docker")
	I1002 17:28:16.888790   55564 client.go:168] LocalClient.Create starting
	I1002 17:28:16.888900   55564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:28:16.888946   55564 main.go:141] libmachine: Decoding PEM data...
	I1002 17:28:16.888964   55564 main.go:141] libmachine: Parsing certificate...
	I1002 17:28:16.889051   55564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:28:16.889082   55564 main.go:141] libmachine: Decoding PEM data...
	I1002 17:28:16.889091   55564 main.go:141] libmachine: Parsing certificate...
	I1002 17:28:16.909875   55564 cli_runner.go:164] Run: docker network inspect offline-docker-088000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:28:17.002451   55564 cli_runner.go:211] docker network inspect offline-docker-088000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:28:17.002550   55564 network_create.go:281] running [docker network inspect offline-docker-088000] to gather additional debugging logs...
	I1002 17:28:17.002566   55564 cli_runner.go:164] Run: docker network inspect offline-docker-088000
	W1002 17:28:17.054302   55564 cli_runner.go:211] docker network inspect offline-docker-088000 returned with exit code 1
	I1002 17:28:17.054333   55564 network_create.go:284] error running [docker network inspect offline-docker-088000]: docker network inspect offline-docker-088000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-088000 not found
	I1002 17:28:17.054349   55564 network_create.go:286] output of [docker network inspect offline-docker-088000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-088000 not found
	
	** /stderr **
	I1002 17:28:17.054489   55564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:28:17.108375   55564 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:28:17.108797   55564 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a95980}
	I1002 17:28:17.108814   55564 network_create.go:124] attempt to create docker network offline-docker-088000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1002 17:28:17.108876   55564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-088000 offline-docker-088000
	I1002 17:28:17.196892   55564 network_create.go:108] docker network offline-docker-088000 192.168.58.0/24 created
	I1002 17:28:17.196926   55564 kic.go:117] calculated static IP "192.168.58.2" for the "offline-docker-088000" container
	I1002 17:28:17.197036   55564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:28:17.250993   55564 cli_runner.go:164] Run: docker volume create offline-docker-088000 --label name.minikube.sigs.k8s.io=offline-docker-088000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:28:17.304514   55564 oci.go:103] Successfully created a docker volume offline-docker-088000
	I1002 17:28:17.304624   55564 cli_runner.go:164] Run: docker run --rm --name offline-docker-088000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-088000 --entrypoint /usr/bin/test -v offline-docker-088000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:28:18.103580   55564 oci.go:107] Successfully prepared a docker volume offline-docker-088000
	I1002 17:28:18.103616   55564 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:28:18.103629   55564 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:28:18.103744   55564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-088000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:34:17.031481   55564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:34:17.031617   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:17.085448   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:34:17.085570   55564 retry.go:31] will retry after 322.815443ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:17.409836   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:17.463735   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:34:17.463836   55564 retry.go:31] will retry after 358.506353ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:17.823737   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:17.879023   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:34:17.879124   55564 retry.go:31] will retry after 418.970783ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:18.300530   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:18.355201   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:34:18.355305   55564 retry.go:31] will retry after 523.741773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:18.881503   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:18.936256   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	W1002 17:34:18.936356   55564 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	
	W1002 17:34:18.936383   55564 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:18.936436   55564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:34:18.936488   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:18.988067   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:34:18.988161   55564 retry.go:31] will retry after 284.845215ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:19.275398   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:19.329697   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:34:19.329784   55564 retry.go:31] will retry after 362.638799ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:19.694841   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:19.747117   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:34:19.747226   55564 retry.go:31] will retry after 459.009171ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:20.207139   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:34:20.259103   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	W1002 17:34:20.259206   55564 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	
	W1002 17:34:20.259224   55564 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:20.259238   55564 start.go:128] duration metric: createHost completed in 6m3.250528967s
	I1002 17:34:20.259245   55564 start.go:83] releasing machines lock for "offline-docker-088000", held for 6m3.250670178s
	W1002 17:34:20.259258   55564 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1002 17:34:20.259682   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:20.310131   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:20.310186   55564 delete.go:82] Unable to get host status for offline-docker-088000, assuming it has already been deleted: state: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	W1002 17:34:20.310291   55564 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1002 17:34:20.310301   55564 start.go:703] Will try again in 5 seconds ...
	I1002 17:34:25.313667   55564 start.go:365] acquiring machines lock for offline-docker-088000: {Name:mkd5b34407122606bd7650d604f8e740eaafe015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:34:25.313860   55564 start.go:369] acquired machines lock for "offline-docker-088000" in 151.74µs
	I1002 17:34:25.313897   55564 start.go:96] Skipping create...Using existing machine configuration
	I1002 17:34:25.313912   55564 fix.go:54] fixHost starting: 
	I1002 17:34:25.314368   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:25.369441   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:25.369483   55564 fix.go:102] recreateIfNeeded on offline-docker-088000: state= err=unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:25.369520   55564 fix.go:107] machineExists: false. err=machine does not exist
	I1002 17:34:25.391283   55564 out.go:177] * docker "offline-docker-088000" container is missing, will recreate.
	I1002 17:34:25.435117   55564 delete.go:124] DEMOLISHING offline-docker-088000 ...
	I1002 17:34:25.435321   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:25.487098   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	W1002 17:34:25.487150   55564 stop.go:75] unable to get state: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:25.487178   55564 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:25.487580   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:25.537090   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:25.537141   55564 delete.go:82] Unable to get host status for offline-docker-088000, assuming it has already been deleted: state: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:25.537217   55564 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-088000
	W1002 17:34:25.587850   55564 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-088000 returned with exit code 1
	I1002 17:34:25.587885   55564 kic.go:367] could not find the container offline-docker-088000 to remove it. will try anyways
	I1002 17:34:25.587967   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:25.637821   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	W1002 17:34:25.637865   55564 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:25.637949   55564 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-088000 /bin/bash -c "sudo init 0"
	W1002 17:34:25.687621   55564 cli_runner.go:211] docker exec --privileged -t offline-docker-088000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 17:34:25.687659   55564 oci.go:647] error shutdown offline-docker-088000: docker exec --privileged -t offline-docker-088000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:26.689481   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:26.743807   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:26.743857   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:26.743870   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:26.743891   55564 retry.go:31] will retry after 544.594237ms: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:27.290988   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:27.345234   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:27.345286   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:27.345299   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:27.345323   55564 retry.go:31] will retry after 852.451295ms: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:28.198935   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:28.254460   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:28.254506   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:28.254519   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:28.254542   55564 retry.go:31] will retry after 712.463072ms: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:28.969496   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:29.023453   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:29.023497   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:29.023510   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:29.023532   55564 retry.go:31] will retry after 2.143753846s: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:31.169188   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:31.223571   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:31.223620   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:31.223633   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:31.223662   55564 retry.go:31] will retry after 1.669256423s: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:32.895455   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:32.949198   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:32.949253   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:32.949272   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:32.949294   55564 retry.go:31] will retry after 3.166530058s: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:36.118455   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:36.213131   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:36.213195   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:36.213215   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:36.213246   55564 retry.go:31] will retry after 5.213732309s: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:41.429765   55564 cli_runner.go:164] Run: docker container inspect offline-docker-088000 --format={{.State.Status}}
	W1002 17:34:41.485109   55564 cli_runner.go:211] docker container inspect offline-docker-088000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:41.485153   55564 oci.go:659] temporary error verifying shutdown: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:34:41.485164   55564 oci.go:661] temporary error: container offline-docker-088000 status is  but expect it to be exited
	I1002 17:34:41.485191   55564 oci.go:88] couldn't shut down offline-docker-088000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	 
	I1002 17:34:41.485262   55564 cli_runner.go:164] Run: docker rm -f -v offline-docker-088000
	I1002 17:34:41.535812   55564 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-088000
	W1002 17:34:41.585674   55564 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-088000 returned with exit code 1
	I1002 17:34:41.585791   55564 cli_runner.go:164] Run: docker network inspect offline-docker-088000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:34:41.636924   55564 cli_runner.go:164] Run: docker network rm offline-docker-088000
	I1002 17:34:41.736737   55564 fix.go:114] Sleeping 1 second for extra luck!
	I1002 17:34:42.737551   55564 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:34:42.760909   55564 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:34:42.761080   55564 start.go:159] libmachine.API.Create for "offline-docker-088000" (driver="docker")
	I1002 17:34:42.761114   55564 client.go:168] LocalClient.Create starting
	I1002 17:34:42.761294   55564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:34:42.761361   55564 main.go:141] libmachine: Decoding PEM data...
	I1002 17:34:42.761384   55564 main.go:141] libmachine: Parsing certificate...
	I1002 17:34:42.761444   55564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:34:42.761496   55564 main.go:141] libmachine: Decoding PEM data...
	I1002 17:34:42.761507   55564 main.go:141] libmachine: Parsing certificate...
	I1002 17:34:42.761974   55564 cli_runner.go:164] Run: docker network inspect offline-docker-088000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:34:42.815855   55564 cli_runner.go:211] docker network inspect offline-docker-088000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:34:42.815950   55564 network_create.go:281] running [docker network inspect offline-docker-088000] to gather additional debugging logs...
	I1002 17:34:42.815981   55564 cli_runner.go:164] Run: docker network inspect offline-docker-088000
	W1002 17:34:42.867011   55564 cli_runner.go:211] docker network inspect offline-docker-088000 returned with exit code 1
	I1002 17:34:42.867036   55564 network_create.go:284] error running [docker network inspect offline-docker-088000]: docker network inspect offline-docker-088000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-088000 not found
	I1002 17:34:42.867050   55564 network_create.go:286] output of [docker network inspect offline-docker-088000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-088000 not found
	
	** /stderr **
	I1002 17:34:42.867181   55564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:34:42.920101   55564 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:34:42.921712   55564 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:34:42.923139   55564 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:34:42.924725   55564 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:34:42.925179   55564 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d7fc70}
	I1002 17:34:42.925193   55564 network_create.go:124] attempt to create docker network offline-docker-088000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1002 17:34:42.925271   55564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-088000 offline-docker-088000
	I1002 17:34:43.012955   55564 network_create.go:108] docker network offline-docker-088000 192.168.85.0/24 created
	I1002 17:34:43.012988   55564 kic.go:117] calculated static IP "192.168.85.2" for the "offline-docker-088000" container
	I1002 17:34:43.013129   55564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:34:43.065993   55564 cli_runner.go:164] Run: docker volume create offline-docker-088000 --label name.minikube.sigs.k8s.io=offline-docker-088000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:34:43.116417   55564 oci.go:103] Successfully created a docker volume offline-docker-088000
	I1002 17:34:43.116544   55564 cli_runner.go:164] Run: docker run --rm --name offline-docker-088000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-088000 --entrypoint /usr/bin/test -v offline-docker-088000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:34:43.418210   55564 oci.go:107] Successfully prepared a docker volume offline-docker-088000
	I1002 17:34:43.418243   55564 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:34:43.418256   55564 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:34:43.418356   55564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-088000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:40:42.788154   55564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:40:42.788276   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:42.841593   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:42.841705   55564 retry.go:31] will retry after 291.100763ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:43.135188   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:43.191179   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:43.191277   55564 retry.go:31] will retry after 264.181498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:43.455784   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:43.510550   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:43.510659   55564 retry.go:31] will retry after 446.26336ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:43.959041   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:44.013435   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	W1002 17:40:44.013543   55564 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	
	W1002 17:40:44.013564   55564 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:44.013620   55564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:40:44.013687   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:44.064056   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:44.064152   55564 retry.go:31] will retry after 272.493009ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:44.338996   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:44.411175   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:44.411273   55564 retry.go:31] will retry after 227.463487ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:44.640998   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:44.696726   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:44.696862   55564 retry.go:31] will retry after 635.977467ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:45.335275   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:45.388276   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	W1002 17:40:45.388377   55564 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	
	W1002 17:40:45.388399   55564 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:45.388410   55564 start.go:128] duration metric: createHost completed in 6m2.625908489s
	I1002 17:40:45.388471   55564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:40:45.388532   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:45.438706   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:45.438819   55564 retry.go:31] will retry after 239.37024ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:45.680585   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:45.732403   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:45.732500   55564 retry.go:31] will retry after 550.475694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:46.285415   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:46.338161   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:46.338251   55564 retry.go:31] will retry after 356.618814ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:46.696607   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:46.748953   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	W1002 17:40:46.749044   55564 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	
	W1002 17:40:46.749066   55564 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:46.749119   55564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:40:46.749180   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:46.799528   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:46.799623   55564 retry.go:31] will retry after 364.034941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:47.165417   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:47.218092   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:47.218180   55564 retry.go:31] will retry after 476.236696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:47.696875   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:47.750765   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	I1002 17:40:47.750863   55564 retry.go:31] will retry after 680.605462ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:48.433874   55564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000
	W1002 17:40:48.487276   55564 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000 returned with exit code 1
	W1002 17:40:48.487390   55564 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	
	W1002 17:40:48.487407   55564 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-088000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-088000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000
	I1002 17:40:48.487422   55564 fix.go:56] fixHost completed within 6m23.146995084s
	I1002 17:40:48.487436   55564 start.go:83] releasing machines lock for "offline-docker-088000", held for 6m23.147046523s
	W1002 17:40:48.487524   55564 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-088000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-088000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1002 17:40:48.531160   55564 out.go:177] 
	W1002 17:40:48.552975   55564 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1002 17:40:48.553025   55564 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1002 17:40:48.553054   55564 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1002 17:40:48.574984   55564 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-088000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2023-10-02 17:40:48.631774 -0700 PDT m=+5896.395010029
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-088000
helpers_test.go:235: (dbg) docker inspect offline-docker-088000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "offline-docker-088000",
	        "Id": "3f13f3efca1f300c0a3de19149573db4515c458968fa51118df51e266fc05b50",
	        "Created": "2023-10-03T00:34:42.970894932Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-088000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-088000 -n offline-docker-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-088000 -n offline-docker-088000: exit status 7 (94.714717ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:40:48.781309   56109 status.go:249] status error: host: state: unknown state "offline-docker-088000": docker container inspect offline-docker-088000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-088000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-088000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-088000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-088000
--- FAIL: TestOffline (753.45s)

TestCertOptions (7200.708s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-876000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E1002 17:55:16.835452   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:55:21.744235   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:55:33.775538   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 18:00:21.764915   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 18:00:33.798243   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (9m7s)
	TestCertOptions (8m37s)
	TestNetworkPlugins (34m16s)
	TestNetworkPlugins/group (34m16s)

                                                
                                                
goroutine 2048 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0005036c0, 0xc000b9fb80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc0008701e0?, {0x4c1dc80, 0x2a, 0x2a}, {0x10b00a5?, 0xc000068180?, 0x4c3f380?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc0008701e0)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00013d480)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 476 [syscall, 8 minutes]:
syscall.syscall6(0x1010585?, 0xc000a9f8f8?, 0xc000a9f7e8?, 0xc000a9f918?, 0x100c000a9f8e0?, 0x1000000000003?, 0x4c1641c0?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000a9f890?, 0x1010905?, 0x90?, 0x3071ea0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc0014bc7f0?, 0xc000a9f8c4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001284120)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000e38580)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000e07040?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000e07040, 0xc000e38580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertOptions(0xc000e07040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x40e
testing.tRunner(0xc000e07040, 0x34d5ee0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 938 [chan send, 112 minutes]:
os/exec.(*Cmd).watchCtx(0xc0012a0420, 0xc000fd59e0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 937
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 23 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.100.1/klog.go:1141 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 22
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.100.1/klog.go:1137 +0x171

                                                
                                                
goroutine 1666 [chan receive, 34 minutes]:
testing.(*T).Run(0xc0014be340, {0x3103d60?, 0x15d47ccc210e?}, 0xc000e9a150)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0014be340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0014be340, 0x34d5fc0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 784 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 783
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 1721 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014be820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014be820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0014be820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:156 +0x86
testing.tRunner(0xc0014be820, 0x34d6010)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2026 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4c408380, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0014cc420?, 0xc000860af2?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014cc420, {0xc000860af2, 0x50e, 0x50e})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000862058, {0xc000860af2?, 0xc000507668?, 0xc000507668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000de05a0, {0x39259c0, 0xc000862058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000de05a0}, {0x39259c0, 0xc000862058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc001112300?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 477
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 559 [IO wait, 116 minutes]:
internal/poll.runtime_pollWait(0x4c408b40, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00120ea80?, 0x4c6b128?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00120ea80)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00120ea80)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000e417c0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc000e417c0)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc0007514a0, {0x393c820, 0xc000e417c0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc0007514a0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc000007a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 556
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x13a

                                                
                                                
goroutine 2027 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4c408478, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0014cc4e0?, 0xc000161063?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014cc4e0, {0xc000161063, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000862070, {0xc000161063?, 0xc001424e68?, 0xc001424e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000de05d0, {0x39259c0, 0xc000862070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000de05d0}, {0x39259c0, 0xc000862070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc001112180?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 477
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1745 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001052b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001052b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001052b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc001052b60, 0xc000454280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1749 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001053520)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001053520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001053520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc001053520, 0xc000454480)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 783 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39492d8, 0xc000aa6060}, 0xc000e34750, 0xc001145e90?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x39492d8, 0xc000aa6060}, 0x0?, 0x0?, 0xc000e347b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39492d8?, 0xc000aa6060?}, 0x0?, 0x1137540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000e347d0?, 0x117c287?, 0xc001167b60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 149 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00088ef60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 123
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 150 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009639c0, 0xc000aa6060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 123
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cache.go:122 +0x594

                                                
                                                
goroutine 153 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0009638d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3922390?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00088ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009639c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3926ee0, 0xc000899560}, 0x1, 0xc000aa6060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 150
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 154 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x39492d8, 0xc000aa6060}, 0xc000087750, 0x5ad?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x39492d8, 0xc000aa6060}, 0x70?, 0x1a757000087758?, 0x651b4c1b?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x39492d8?, 0xc000aa6060?}, 0xc0001031e0?, 0x1137540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1138405?, 0xc0001031e0?, 0x34d5f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 150
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 155 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 1238 [select, 111 minutes]:
net/http.(*persistConn).writeLoop(0xc0010f8b40)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1223
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

                                                
                                                
goroutine 477 [syscall, 9 minutes]:
syscall.syscall6(0x1010585?, 0xc000a9da98?, 0xc000a9d988?, 0xc000a9dab8?, 0x100c000a9da80?, 0x1000000000003?, 0x537a070?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000a9da30?, 0x1010905?, 0x90?, 0x3071ea0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc0012b8420?, 0xc000a9da64, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0012843f0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0010b8160)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000e071e0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000e071e0, 0xc0010b8160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertExpiration(0xc000e071e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2d7
testing.tRunner(0xc000e071e0, 0x34d5ed8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1102 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ee9080, 0xc000aa6f60)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1101
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1751 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001053860)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001053860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001053860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc001053860, 0xc000454580)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1135 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc00109ac60, 0xc000aa7d40)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 689
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1734 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e06340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e06340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0014be340?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc000e06340, 0x34d6008)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1147 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc00103d080, 0xc0008978c0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1146
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1723 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014bf380)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014bf380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0014bf380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:305 +0xb4
testing.tRunner(0xc0014bf380, 0x34d5fa8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2028 [select, 9 minutes]:
os/exec.(*Cmd).watchCtx(0xc0010b8160, 0xc000fd4300)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 477
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1237 [select, 111 minutes]:
net/http.(*persistConn).readLoop(0xc0010f8b40)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1223
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

                                                
                                                
goroutine 782 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00085cd10, 0x2c)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3922390?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0008470e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00085cd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001429788?, {0x3926ee0, 0xc0010df9e0}, 0x1, 0xc000aa6060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x10446bc?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0014297d0?, 0x117c287?, 0xc001167800?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2032 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4c408a48, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0010da5a0?, 0xc000e3e2e6?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0010da5a0, {0xc000e3e2e6, 0x51a, 0x51a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00062a1e0, {0xc000e3e2e6?, 0xc00088ea80?, 0xc000505668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000e6e390, {0x39259c0, 0xc00062a1e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000e6e390}, {0x39259c0, 0xc00062a1e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0011124e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 476
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1748 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0010531e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0010531e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0010531e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0010531e0, 0xc000454400)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1750 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0010536c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0010536c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0010536c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0010536c0, 0xc000454500)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 801 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00085cd40, 0xc000aa6060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 702
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/transport/cache.go:122 +0x594

                                                
                                                
goroutine 800 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000847200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 702
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.2/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 1747 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001052ea0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001052ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001052ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc001052ea0, 0xc000454380)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1744 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0010524e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0010524e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0010524e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0010524e0, 0xc000454180)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1743 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001052000)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001052000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001052000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc001052000, 0xc000454000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1722 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014beea0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014beea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0014beea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:228 +0x39
testing.tRunner(0xc0014beea0, 0x34d5f90)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1742 [chan receive, 34 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000603860, 0xc000e9a150)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1666
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1746 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc001052d00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001052d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001052d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc001052d00, 0xc000454300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1668 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014beb60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014beb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0014beb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0014beb60, 0x34d5fd8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2050 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000e38580, 0xc000fd4360)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 476
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 2049 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4c407ea8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc0010da660?, 0xc000d42063?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0010da660, {0xc000d42063, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00062a218, {0xc000d42063?, 0xc000000000?, 0xc000e35e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000e6e3c0, {0x39259c0, 0xc00062a218})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3925a40, 0xc000e6e3c0}, {0x39259c0, 0xc00062a218}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc001112300?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 476
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1667 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0014be9c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014be9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0014be9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0014be9c0, 0x34d5fc8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1720 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc000ba75e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000e07a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000e07a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc000e07a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:98 +0x89
testing.tRunner(0xc000e07a00, 0x34d5fe8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

TestDockerFlags (754.3s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-637000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E1002 17:45:21.703459   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:45:33.734274   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:50:04.779030   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:50:21.724331   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:50:33.756216   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-637000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m32.984169167s)

-- stdout --
	* [docker-flags-637000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-637000 in cluster docker-flags-637000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-637000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1002 17:41:21.216570   56248 out.go:296] Setting OutFile to fd 1 ...
	I1002 17:41:21.237140   56248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:41:21.237159   56248 out.go:309] Setting ErrFile to fd 2...
	I1002 17:41:21.237169   56248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:41:21.237577   56248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 17:41:21.240597   56248 out.go:303] Setting JSON to false
	I1002 17:41:21.262751   56248 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25850,"bootTime":1696267831,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 17:41:21.263481   56248 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 17:41:21.286092   56248 out.go:177] * [docker-flags-637000] minikube v1.31.2 on Darwin 14.0
	I1002 17:41:21.350960   56248 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 17:41:21.329217   56248 notify.go:220] Checking for updates...
	I1002 17:41:21.393829   56248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 17:41:21.414825   56248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 17:41:21.436853   56248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 17:41:21.459052   56248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 17:41:21.480838   56248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 17:41:21.502432   56248 config.go:182] Loaded profile config "force-systemd-flag-020000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 17:41:21.502583   56248 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 17:41:21.559881   56248 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 17:41:21.560027   56248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:41:21.658443   56248 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:false NGoroutines:200 SystemTime:2023-10-03 00:41:21.64610861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:41:21.702211   56248 out.go:177] * Using the docker driver based on user configuration
	I1002 17:41:21.723139   56248 start.go:298] selected driver: docker
	I1002 17:41:21.723169   56248 start.go:902] validating driver "docker" against <nil>
	I1002 17:41:21.723185   56248 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 17:41:21.727621   56248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:41:21.825912   56248 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:false NGoroutines:200 SystemTime:2023-10-03 00:41:21.813401672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:41:21.826078   56248 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 17:41:21.826318   56248 start_flags.go:918] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1002 17:41:21.847317   56248 out.go:177] * Using Docker Desktop driver with root privileges
	I1002 17:41:21.868410   56248 cni.go:84] Creating CNI manager for ""
	I1002 17:41:21.868450   56248 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 17:41:21.868468   56248 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 17:41:21.868485   56248 start_flags.go:321] config:
	{Name:docker-flags-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-637000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s}
	I1002 17:41:21.911354   56248 out.go:177] * Starting control plane node docker-flags-637000 in cluster docker-flags-637000
	I1002 17:41:21.932241   56248 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 17:41:21.953265   56248 out.go:177] * Pulling base image ...
	I1002 17:41:21.975479   56248 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:41:21.975565   56248 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 17:41:21.975585   56248 cache.go:57] Caching tarball of preloaded images
	I1002 17:41:21.975576   56248 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 17:41:21.975768   56248 preload.go:174] Found /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 17:41:21.975791   56248 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 17:41:21.975931   56248 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/docker-flags-637000/config.json ...
	I1002 17:41:21.975973   56248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/docker-flags-637000/config.json: {Name:mk9903268e79a30455e430ea20aba73c870d099c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 17:41:22.027663   56248 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 17:41:22.027681   56248 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 17:41:22.027707   56248 cache.go:195] Successfully downloaded all kic artifacts
	I1002 17:41:22.027763   56248 start.go:365] acquiring machines lock for docker-flags-637000: {Name:mkf3a235d098757ea80381516af681a208001645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:41:22.027921   56248 start.go:369] acquired machines lock for "docker-flags-637000" in 144.953µs
	I1002 17:41:22.027948   56248 start.go:93] Provisioning new machine with config: &{Name:docker-flags-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-637000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 17:41:22.028019   56248 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:41:22.070284   56248 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:41:22.070719   56248 start.go:159] libmachine.API.Create for "docker-flags-637000" (driver="docker")
	I1002 17:41:22.070774   56248 client.go:168] LocalClient.Create starting
	I1002 17:41:22.070985   56248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:41:22.071073   56248 main.go:141] libmachine: Decoding PEM data...
	I1002 17:41:22.071106   56248 main.go:141] libmachine: Parsing certificate...
	I1002 17:41:22.071222   56248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:41:22.071285   56248 main.go:141] libmachine: Decoding PEM data...
	I1002 17:41:22.071300   56248 main.go:141] libmachine: Parsing certificate...
	I1002 17:41:22.072286   56248 cli_runner.go:164] Run: docker network inspect docker-flags-637000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:41:22.123338   56248 cli_runner.go:211] docker network inspect docker-flags-637000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:41:22.123443   56248 network_create.go:281] running [docker network inspect docker-flags-637000] to gather additional debugging logs...
	I1002 17:41:22.123461   56248 cli_runner.go:164] Run: docker network inspect docker-flags-637000
	W1002 17:41:22.174814   56248 cli_runner.go:211] docker network inspect docker-flags-637000 returned with exit code 1
	I1002 17:41:22.174841   56248 network_create.go:284] error running [docker network inspect docker-flags-637000]: docker network inspect docker-flags-637000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-637000 not found
	I1002 17:41:22.174858   56248 network_create.go:286] output of [docker network inspect docker-flags-637000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-637000 not found
	
	** /stderr **
	I1002 17:41:22.175022   56248 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:41:22.227434   56248 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:41:22.229059   56248 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:41:22.229457   56248 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000df2d10}
	I1002 17:41:22.229485   56248 network_create.go:124] attempt to create docker network docker-flags-637000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1002 17:41:22.229558   56248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-637000 docker-flags-637000
	W1002 17:41:22.280626   56248 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-637000 docker-flags-637000 returned with exit code 1
	W1002 17:41:22.280658   56248 network_create.go:149] failed to create docker network docker-flags-637000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-637000 docker-flags-637000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1002 17:41:22.280684   56248 network_create.go:116] failed to create docker network docker-flags-637000 192.168.67.0/24, will retry: subnet is taken
	I1002 17:41:22.282247   56248 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:41:22.282627   56248 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f952a0}
	I1002 17:41:22.282640   56248 network_create.go:124] attempt to create docker network docker-flags-637000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1002 17:41:22.282712   56248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-637000 docker-flags-637000
	I1002 17:41:22.370368   56248 network_create.go:108] docker network docker-flags-637000 192.168.76.0/24 created
	I1002 17:41:22.370409   56248 kic.go:117] calculated static IP "192.168.76.2" for the "docker-flags-637000" container
	I1002 17:41:22.370522   56248 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:41:22.424131   56248 cli_runner.go:164] Run: docker volume create docker-flags-637000 --label name.minikube.sigs.k8s.io=docker-flags-637000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:41:22.476521   56248 oci.go:103] Successfully created a docker volume docker-flags-637000
	I1002 17:41:22.476646   56248 cli_runner.go:164] Run: docker run --rm --name docker-flags-637000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-637000 --entrypoint /usr/bin/test -v docker-flags-637000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:41:22.880070   56248 oci.go:107] Successfully prepared a docker volume docker-flags-637000
	I1002 17:41:22.880104   56248 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:41:22.880119   56248 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:41:22.880241   56248 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-637000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:47:22.096962   56248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:47:22.097099   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:22.150059   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:47:22.150183   56248 retry.go:31] will retry after 242.748229ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:22.395365   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:22.448524   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:47:22.448610   56248 retry.go:31] will retry after 449.730461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:22.899090   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:22.952586   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:47:22.952673   56248 retry.go:31] will retry after 737.182078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:23.690796   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:23.744604   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	W1002 17:47:23.744701   56248 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	W1002 17:47:23.744727   56248 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:23.744789   56248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:47:23.744841   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:23.794911   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:47:23.795035   56248 retry.go:31] will retry after 199.328897ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:23.996708   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:24.049129   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:47:24.049215   56248 retry.go:31] will retry after 385.854833ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:24.436388   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:24.491540   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:47:24.491651   56248 retry.go:31] will retry after 575.793464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:25.068195   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:47:25.123646   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	W1002 17:47:25.123747   56248 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	W1002 17:47:25.123770   56248 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:25.123790   56248 start.go:128] duration metric: createHost completed in 6m3.07086518s
	I1002 17:47:25.123798   56248 start.go:83] releasing machines lock for "docker-flags-637000", held for 6m3.070974977s
	W1002 17:47:25.123811   56248 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1002 17:47:25.124268   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:25.173891   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:25.173938   56248 delete.go:82] Unable to get host status for docker-flags-637000, assuming it has already been deleted: state: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	W1002 17:47:25.174024   56248 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1002 17:47:25.174039   56248 start.go:703] Will try again in 5 seconds ...
	I1002 17:47:30.176373   56248 start.go:365] acquiring machines lock for docker-flags-637000: {Name:mkf3a235d098757ea80381516af681a208001645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:47:30.176575   56248 start.go:369] acquired machines lock for "docker-flags-637000" in 158.184µs
	I1002 17:47:30.176619   56248 start.go:96] Skipping create...Using existing machine configuration
	I1002 17:47:30.176635   56248 fix.go:54] fixHost starting: 
	I1002 17:47:30.177174   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:30.231089   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:30.231129   56248 fix.go:102] recreateIfNeeded on docker-flags-637000: state= err=unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:30.231150   56248 fix.go:107] machineExists: false. err=machine does not exist
	I1002 17:47:30.253226   56248 out.go:177] * docker "docker-flags-637000" container is missing, will recreate.
	I1002 17:47:30.296597   56248 delete.go:124] DEMOLISHING docker-flags-637000 ...
	I1002 17:47:30.296804   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:30.348758   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	W1002 17:47:30.348804   56248 stop.go:75] unable to get state: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:30.348827   56248 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:30.349184   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:30.399266   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:30.399332   56248 delete.go:82] Unable to get host status for docker-flags-637000, assuming it has already been deleted: state: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:30.399409   56248 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-637000
	W1002 17:47:30.449240   56248 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-637000 returned with exit code 1
	I1002 17:47:30.449280   56248 kic.go:367] could not find the container docker-flags-637000 to remove it. will try anyways
	I1002 17:47:30.449355   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:30.499405   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	W1002 17:47:30.499455   56248 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:30.499563   56248 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-637000 /bin/bash -c "sudo init 0"
	W1002 17:47:30.551025   56248 cli_runner.go:211] docker exec --privileged -t docker-flags-637000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 17:47:30.551062   56248 oci.go:647] error shutdown docker-flags-637000: docker exec --privileged -t docker-flags-637000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:31.553512   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:31.606526   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:31.606568   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:31.606585   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:31.606606   56248 retry.go:31] will retry after 331.040974ms: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:31.938074   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:31.992380   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:31.992433   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:31.992447   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:31.992468   56248 retry.go:31] will retry after 806.67196ms: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:32.800663   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:32.854842   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:32.854890   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:32.854908   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:32.854929   56248 retry.go:31] will retry after 1.254051913s: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:34.111557   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:34.164115   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:34.164159   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:34.164173   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:34.164194   56248 retry.go:31] will retry after 2.352958253s: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:36.517723   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:36.570799   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:36.570842   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:36.570857   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:36.570878   56248 retry.go:31] will retry after 1.643726148s: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:38.217185   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:38.271771   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:38.271812   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:38.271826   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:38.271846   56248 retry.go:31] will retry after 5.021800861s: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:43.294205   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:43.348118   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:43.348163   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:43.348175   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:43.348198   56248 retry.go:31] will retry after 3.28367639s: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:46.634450   56248 cli_runner.go:164] Run: docker container inspect docker-flags-637000 --format={{.State.Status}}
	W1002 17:47:46.691095   56248 cli_runner.go:211] docker container inspect docker-flags-637000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:46.691140   56248 oci.go:659] temporary error verifying shutdown: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:47:46.691150   56248 oci.go:661] temporary error: container docker-flags-637000 status is  but expect it to be exited
	I1002 17:47:46.691179   56248 oci.go:88] couldn't shut down docker-flags-637000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	 
	I1002 17:47:46.691251   56248 cli_runner.go:164] Run: docker rm -f -v docker-flags-637000
	I1002 17:47:46.744428   56248 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-637000
	W1002 17:47:46.794633   56248 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-637000 returned with exit code 1
	I1002 17:47:46.794750   56248 cli_runner.go:164] Run: docker network inspect docker-flags-637000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:47:46.845993   56248 cli_runner.go:164] Run: docker network rm docker-flags-637000
	I1002 17:47:46.943118   56248 fix.go:114] Sleeping 1 second for extra luck!
	I1002 17:47:47.944957   56248 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:47:47.968212   56248 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:47:47.968406   56248 start.go:159] libmachine.API.Create for "docker-flags-637000" (driver="docker")
	I1002 17:47:47.968457   56248 client.go:168] LocalClient.Create starting
	I1002 17:47:47.968669   56248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:47:47.968757   56248 main.go:141] libmachine: Decoding PEM data...
	I1002 17:47:47.968795   56248 main.go:141] libmachine: Parsing certificate...
	I1002 17:47:47.968882   56248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:47:47.968947   56248 main.go:141] libmachine: Decoding PEM data...
	I1002 17:47:47.968979   56248 main.go:141] libmachine: Parsing certificate...
	I1002 17:47:47.989990   56248 cli_runner.go:164] Run: docker network inspect docker-flags-637000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:47:48.041646   56248 cli_runner.go:211] docker network inspect docker-flags-637000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:47:48.041748   56248 network_create.go:281] running [docker network inspect docker-flags-637000] to gather additional debugging logs...
	I1002 17:47:48.041767   56248 cli_runner.go:164] Run: docker network inspect docker-flags-637000
	W1002 17:47:48.091452   56248 cli_runner.go:211] docker network inspect docker-flags-637000 returned with exit code 1
	I1002 17:47:48.091488   56248 network_create.go:284] error running [docker network inspect docker-flags-637000]: docker network inspect docker-flags-637000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-637000 not found
	I1002 17:47:48.091503   56248 network_create.go:286] output of [docker network inspect docker-flags-637000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-637000 not found
	
	** /stderr **
	I1002 17:47:48.091637   56248 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:47:48.143574   56248 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:48.144970   56248 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:48.146524   56248 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:48.148119   56248 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:48.149710   56248 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:48.150068   56248 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000eadb90}
	I1002 17:47:48.150080   56248 network_create.go:124] attempt to create docker network docker-flags-637000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1002 17:47:48.150147   56248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-637000 docker-flags-637000
	I1002 17:47:48.237432   56248 network_create.go:108] docker network docker-flags-637000 192.168.94.0/24 created
	I1002 17:47:48.237465   56248 kic.go:117] calculated static IP "192.168.94.2" for the "docker-flags-637000" container
	I1002 17:47:48.237560   56248 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:47:48.290577   56248 cli_runner.go:164] Run: docker volume create docker-flags-637000 --label name.minikube.sigs.k8s.io=docker-flags-637000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:47:48.341708   56248 oci.go:103] Successfully created a docker volume docker-flags-637000
	I1002 17:47:48.341823   56248 cli_runner.go:164] Run: docker run --rm --name docker-flags-637000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-637000 --entrypoint /usr/bin/test -v docker-flags-637000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:47:48.664558   56248 oci.go:107] Successfully prepared a docker volume docker-flags-637000
	I1002 17:47:48.664586   56248 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:47:48.664597   56248 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:47:48.664709   56248 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-637000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:53:47.993847   56248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:53:47.993977   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:48.050219   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:48.050322   56248 retry.go:31] will retry after 296.725082ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:48.349551   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:48.403944   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:48.404059   56248 retry.go:31] will retry after 464.922187ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:48.869956   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:48.920495   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:48.920599   56248 retry.go:31] will retry after 622.51852ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:49.544486   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:49.596683   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	W1002 17:53:49.596784   56248 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	W1002 17:53:49.596808   56248 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:49.596860   56248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:53:49.596927   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:49.647014   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:49.647109   56248 retry.go:31] will retry after 234.59389ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:49.884122   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:49.937697   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:49.937816   56248 retry.go:31] will retry after 510.665882ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:50.450839   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:50.505501   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:50.505592   56248 retry.go:31] will retry after 827.649936ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:51.335645   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:51.388991   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	W1002 17:53:51.389100   56248 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	W1002 17:53:51.389120   56248 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:51.389137   56248 start.go:128] duration metric: createHost completed in 6m3.419195757s
	I1002 17:53:51.389201   56248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:53:51.389260   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:51.439298   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:51.439384   56248 retry.go:31] will retry after 361.336485ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:51.801677   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:51.854243   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:51.854329   56248 retry.go:31] will retry after 314.453129ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:52.171256   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:52.226508   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:52.226592   56248 retry.go:31] will retry after 377.981791ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:52.606891   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:52.665352   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	W1002 17:53:52.665450   56248 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	W1002 17:53:52.665471   56248 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:52.665538   56248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:53:52.665598   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:52.715221   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:52.715310   56248 retry.go:31] will retry after 335.59589ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:53.053336   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:53.107152   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:53.107241   56248 retry.go:31] will retry after 508.97678ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:53.618708   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:53.672079   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	I1002 17:53:53.672163   56248 retry.go:31] will retry after 289.857942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:53.964064   56248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000
	W1002 17:53:54.015947   56248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000 returned with exit code 1
	W1002 17:53:54.016045   56248 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	W1002 17:53:54.016072   56248 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-637000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-637000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	I1002 17:53:54.016085   56248 fix.go:56] fixHost completed within 6m23.812982173s
	I1002 17:53:54.016094   56248 start.go:83] releasing machines lock for "docker-flags-637000", held for 6m23.813033494s
	W1002 17:53:54.016171   56248 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-637000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-637000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1002 17:53:54.062440   56248 out.go:177] 
	W1002 17:53:54.083731   56248 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1002 17:53:54.083781   56248 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1002 17:53:54.083849   56248 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1002 17:53:54.129401   56248 out.go:177] 

                                                
                                                
** /stderr **
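The stderr log above shows the failure shape: the preload extraction (`docker run ... tar -I lz4 -xf /preloaded.tar`) was launched at 17:47:48, and the next logged step, the disk-space check at 17:53:47, only ran after the 360-second create-host budget had already been spent (createHost itself reports 6m3.4s), so the node container was never created and every later `docker container inspect` of the 22/tcp host port returned "No such container". The retry.go lines are minikube's bounded retry-with-growing-delay around that inspect call. Below is a standalone sketch of the same pattern; it is not minikube source, and the container name and retry budget are made up for illustration.

// inspectport.go: a sketch (not minikube code) of the "inspect container,
// retry with a short growing delay" pattern visible in the retry.go lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// hostPort asks Docker for the host port mapped to 22/tcp, retrying a few
// times because the container may not exist yet (or, as in this log, ever).
func hostPort(container string, attempts int) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	var lastErr error
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, strings.TrimSpace(string(out)))
		time.Sleep(delay)
		delay += delay / 2 // grow the wait a little each round
	}
	return "", lastErr
}

func main() {
	port, err := hostPort("docker-flags-637000", 4)
	if err != nil {
		fmt.Println("could not resolve ssh port:", err)
		return
	}
	fmt.Println("ssh port:", port)
}

Against the failed run above, every attempt would end in the same "No such container" error, which is exactly what the repeated retry.go entries record.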
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-637000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (193.959102ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-637000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
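The two assertions at docker_test.go:63 expect the values passed via --docker-env (FOO=BAR and BAZ=BAT) to surface in the Environment= line printed by `systemctl show docker --property=Environment --no-pager` on the node; because the node never came up, the captured output was just "\n\n" and both checks failed. A minimal sketch of that kind of check follows, written against a hypothetical captured line rather than a live cluster.

// envcheck.go: a sketch of the docker_test.go:63 style assertion. The sample
// Environment= line is an assumption about what a healthy node would print.
package main

import (
	"fmt"
	"strings"
)

// containsEnv reports whether a `systemctl show docker --property=Environment`
// style output mentions the given KEY=VALUE pair.
func containsEnv(systemctlOut, pair string) bool {
	for _, line := range strings.Split(systemctlOut, "\n") {
		if !strings.HasPrefix(line, "Environment=") {
			continue
		}
		for _, field := range strings.Fields(strings.TrimPrefix(line, "Environment=")) {
			if field == pair {
				return true
			}
		}
	}
	return false
}

func main() {
	// Example output a healthy cluster might return; the failed run above
	// produced only an empty string, so both checks would report false.
	out := "Environment=FOO=BAR BAZ=BAT\n"
	fmt.Println(containsEnv(out, "FOO=BAR"), containsEnv(out, "BAZ=BAT"))
}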
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-637000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (187.204518ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-637000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-637000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
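Similarly, docker_test.go:73 expects the --docker-opt values (debug, icc=true) to be forwarded to dockerd and therefore appear in the ExecStart property; with an empty capture the substring check for --debug cannot succeed. A tiny sketch of that check is below; the ExecStart line is an assumed example of what a healthy node might report, not output from this run.

// optcheck.go: a sketch of the docker_test.go:73 style check: does the
// dockerd ExecStart line carry the flags passed via --docker-opt?
package main

import (
	"fmt"
	"strings"
)

func main() {
	execStart := "ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }"
	for _, want := range []string{"--debug", "--icc=true"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(execStart, want))
	}
}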
panic.go:523: *** TestDockerFlags FAILED at 2023-10-02 17:53:54.565798 -0700 PDT m=+6682.275074293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-637000
helpers_test.go:235: (dbg) docker inspect docker-flags-637000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "docker-flags-637000",
	        "Id": "5759f83728e63bda0106b5a4853f61fb6763c656522ad5ba6499425fe72c3c52",
	        "Created": "2023-10-03T00:47:48.195603607Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-637000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
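The post-mortem inspect confirms what the log implied: the docker-flags-637000 bridge network (subnet 192.168.94.0/24, MTU 65535, minikube labels) was created, but "Containers": {} shows that no node container ever attached to it. Leftover networks like this can be found by the created_by.minikube.sigs.k8s.io=true label; the following is an illustrative sketch only (it assumes the docker CLI is on PATH), and `minikube delete -p <profile>` remains the supported cleanup path.

// leftovers.go: a sketch that lists Docker networks carrying the minikube
// label seen in the inspect output above. Purely an illustrative cleanup aid.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		fmt.Println("leftover minikube network:", name)
		// A real cleanup would run `docker network rm <name>` here;
		// `minikube delete -p <profile>` does the equivalent and more.
	}
}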
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-637000 -n docker-flags-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-637000 -n docker-flags-637000: exit status 7 (94.719331ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:53:54.715899   56706 status.go:249] status error: host: state: unknown state "docker-flags-637000": docker container inspect docker-flags-637000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-637000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-637000" host is not running, skipping log retrieval (state="Nonexistent")
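The probe at helpers_test.go:239 shells out to `minikube status --format={{.Host}}` and, for a profile whose container no longer exists, gets "Nonexistent" on stdout with exit status 7, which the helper treats as acceptable ("may be ok") before skipping log retrieval. A standalone sketch of that probe follows; the binary name and profile are illustrative.

// statusprobe.go: a sketch of the post-mortem status probe. It reports the
// raw host state string and whatever error the non-zero exit produced.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "docker-flags-637000")
	out, err := cmd.Output() // stdout is still returned even on a non-zero exit
	state := strings.TrimSpace(string(out))
	// A missing profile prints "Nonexistent" and exits non-zero; the test
	// helper logs this as "may be ok" rather than failing hard.
	fmt.Printf("host state %q (err: %v)\n", state, err)
}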
helpers_test.go:175: Cleaning up "docker-flags-637000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-637000
--- FAIL: TestDockerFlags (754.30s)

                                                
                                    
x
+
TestForceSystemdFlag (755.58s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-020000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-020000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.503438831s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-020000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-020000 in cluster force-systemd-flag-020000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-020000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 17:40:49.537025   56133 out.go:296] Setting OutFile to fd 1 ...
	I1002 17:40:49.537304   56133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:40:49.537309   56133 out.go:309] Setting ErrFile to fd 2...
	I1002 17:40:49.537313   56133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:40:49.537507   56133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 17:40:49.538995   56133 out.go:303] Setting JSON to false
	I1002 17:40:49.561071   56133 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25818,"bootTime":1696267831,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 17:40:49.561213   56133 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 17:40:49.582719   56133 out.go:177] * [force-systemd-flag-020000] minikube v1.31.2 on Darwin 14.0
	I1002 17:40:49.625698   56133 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 17:40:49.625779   56133 notify.go:220] Checking for updates...
	I1002 17:40:49.669490   56133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 17:40:49.712385   56133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 17:40:49.733481   56133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 17:40:49.754656   56133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 17:40:49.798406   56133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 17:40:49.820511   56133 config.go:182] Loaded profile config "force-systemd-env-153000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 17:40:49.820701   56133 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 17:40:49.880529   56133 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 17:40:49.880671   56133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:40:49.979103   56133 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:false NGoroutines:190 SystemTime:2023-10-03 00:40:49.967008007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:40:50.001222   56133 out.go:177] * Using the docker driver based on user configuration
	I1002 17:40:50.022984   56133 start.go:298] selected driver: docker
	I1002 17:40:50.023014   56133 start.go:902] validating driver "docker" against <nil>
	I1002 17:40:50.023028   56133 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 17:40:50.027320   56133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:40:50.127459   56133 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:false NGoroutines:190 SystemTime:2023-10-03 00:40:50.115283384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=uncon
fined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Man
ages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:40:50.127663   56133 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 17:40:50.127847   56133 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 17:40:50.149049   56133 out.go:177] * Using Docker Desktop driver with root privileges
	I1002 17:40:50.171097   56133 cni.go:84] Creating CNI manager for ""
	I1002 17:40:50.171134   56133 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 17:40:50.171155   56133 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 17:40:50.171179   56133 start_flags.go:321] config:
	{Name:force-systemd-flag-020000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-020000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 17:40:50.214995   56133 out.go:177] * Starting control plane node force-systemd-flag-020000 in cluster force-systemd-flag-020000
	I1002 17:40:50.237167   56133 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 17:40:50.258831   56133 out.go:177] * Pulling base image ...
	I1002 17:40:50.302030   56133 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:40:50.302109   56133 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 17:40:50.302140   56133 cache.go:57] Caching tarball of preloaded images
	I1002 17:40:50.302134   56133 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 17:40:50.302340   56133 preload.go:174] Found /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 17:40:50.302363   56133 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 17:40:50.302506   56133 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/force-systemd-flag-020000/config.json ...
	I1002 17:40:50.302574   56133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/force-systemd-flag-020000/config.json: {Name:mk57e46f773041019c23d53aa3935705663f8d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 17:40:50.356380   56133 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 17:40:50.356398   56133 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 17:40:50.356425   56133 cache.go:195] Successfully downloaded all kic artifacts
	I1002 17:40:50.356480   56133 start.go:365] acquiring machines lock for force-systemd-flag-020000: {Name:mke77e6777a3733bf393c12a7b381d3b53a60ecf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:40:50.356628   56133 start.go:369] acquired machines lock for "force-systemd-flag-020000" in 134.793µs
	I1002 17:40:50.356655   56133 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-020000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-020000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 17:40:50.356716   56133 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:40:50.379927   56133 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:40:50.380323   56133 start.go:159] libmachine.API.Create for "force-systemd-flag-020000" (driver="docker")
	I1002 17:40:50.380365   56133 client.go:168] LocalClient.Create starting
	I1002 17:40:50.380487   56133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:40:50.380570   56133 main.go:141] libmachine: Decoding PEM data...
	I1002 17:40:50.380603   56133 main.go:141] libmachine: Parsing certificate...
	I1002 17:40:50.380724   56133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:40:50.380796   56133 main.go:141] libmachine: Decoding PEM data...
	I1002 17:40:50.380825   56133 main.go:141] libmachine: Parsing certificate...
	I1002 17:40:50.381495   56133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-020000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:40:50.433524   56133 cli_runner.go:211] docker network inspect force-systemd-flag-020000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:40:50.433618   56133 network_create.go:281] running [docker network inspect force-systemd-flag-020000] to gather additional debugging logs...
	I1002 17:40:50.433636   56133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-020000
	W1002 17:40:50.484486   56133 cli_runner.go:211] docker network inspect force-systemd-flag-020000 returned with exit code 1
	I1002 17:40:50.484517   56133 network_create.go:284] error running [docker network inspect force-systemd-flag-020000]: docker network inspect force-systemd-flag-020000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-020000 not found
	I1002 17:40:50.484531   56133 network_create.go:286] output of [docker network inspect force-systemd-flag-020000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-020000 not found
	
	** /stderr **
	I1002 17:40:50.484662   56133 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:40:50.536901   56133 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:40:50.537453   56133 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f238c0}
	I1002 17:40:50.537471   56133 network_create.go:124] attempt to create docker network force-systemd-flag-020000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1002 17:40:50.537540   56133 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-020000 force-systemd-flag-020000
	I1002 17:40:50.624334   56133 network_create.go:108] docker network force-systemd-flag-020000 192.168.58.0/24 created
	I1002 17:40:50.624382   56133 kic.go:117] calculated static IP "192.168.58.2" for the "force-systemd-flag-020000" container
	I1002 17:40:50.624499   56133 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:40:50.678312   56133 cli_runner.go:164] Run: docker volume create force-systemd-flag-020000 --label name.minikube.sigs.k8s.io=force-systemd-flag-020000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:40:50.729373   56133 oci.go:103] Successfully created a docker volume force-systemd-flag-020000
	I1002 17:40:50.729494   56133 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-020000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-020000 --entrypoint /usr/bin/test -v force-systemd-flag-020000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:40:51.130378   56133 oci.go:107] Successfully prepared a docker volume force-systemd-flag-020000
	I1002 17:40:51.130416   56133 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:40:51.130431   56133 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:40:51.130537   56133 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-020000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:46:50.405547   56133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:46:50.405695   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:50.458966   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:46:50.459088   56133 retry.go:31] will retry after 310.216993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:50.770272   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:50.822020   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:46:50.822126   56133 retry.go:31] will retry after 554.335322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:51.376812   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:51.430328   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:46:51.430430   56133 retry.go:31] will retry after 424.238257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:51.855063   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:51.908962   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	W1002 17:46:51.909060   56133 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	
	W1002 17:46:51.909083   56133 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:51.909140   56133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:46:51.909197   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:51.960042   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:46:51.960140   56133 retry.go:31] will retry after 237.22651ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:52.198643   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:52.250506   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:46:52.250605   56133 retry.go:31] will retry after 291.129585ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:52.544052   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:52.597549   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:46:52.597639   56133 retry.go:31] will retry after 430.196357ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:53.028505   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:46:53.081726   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	W1002 17:46:53.081843   56133 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	
	W1002 17:46:53.081869   56133 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:53.081881   56133 start.go:128] duration metric: createHost completed in 6m2.700383016s
	I1002 17:46:53.081889   56133 start.go:83] releasing machines lock for "force-systemd-flag-020000", held for 6m2.700483221s
	W1002 17:46:53.081902   56133 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1002 17:46:53.082324   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:46:53.134031   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:46:53.134079   56133 delete.go:82] Unable to get host status for force-systemd-flag-020000, assuming it has already been deleted: state: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	W1002 17:46:53.134150   56133 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1002 17:46:53.134160   56133 start.go:703] Will try again in 5 seconds ...
	I1002 17:46:58.135492   56133 start.go:365] acquiring machines lock for force-systemd-flag-020000: {Name:mke77e6777a3733bf393c12a7b381d3b53a60ecf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:46:58.136410   56133 start.go:369] acquired machines lock for "force-systemd-flag-020000" in 773.891µs
	I1002 17:46:58.136477   56133 start.go:96] Skipping create...Using existing machine configuration
	I1002 17:46:58.136492   56133 fix.go:54] fixHost starting: 
	I1002 17:46:58.136979   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:46:58.189927   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:46:58.189970   56133 fix.go:102] recreateIfNeeded on force-systemd-flag-020000: state= err=unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:58.190038   56133 fix.go:107] machineExists: false. err=machine does not exist
	I1002 17:46:58.211906   56133 out.go:177] * docker "force-systemd-flag-020000" container is missing, will recreate.
	I1002 17:46:58.233561   56133 delete.go:124] DEMOLISHING force-systemd-flag-020000 ...
	I1002 17:46:58.233767   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:46:58.284883   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	W1002 17:46:58.284936   56133 stop.go:75] unable to get state: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:58.284958   56133 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:58.285363   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:46:58.335589   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:46:58.335637   56133 delete.go:82] Unable to get host status for force-systemd-flag-020000, assuming it has already been deleted: state: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:58.335709   56133 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-020000
	W1002 17:46:58.385844   56133 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-020000 returned with exit code 1
	I1002 17:46:58.385886   56133 kic.go:367] could not find the container force-systemd-flag-020000 to remove it. will try anyways
	I1002 17:46:58.385962   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:46:58.436514   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	W1002 17:46:58.436560   56133 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:58.436638   56133 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-020000 /bin/bash -c "sudo init 0"
	W1002 17:46:58.486817   56133 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-020000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 17:46:58.486853   56133 oci.go:647] error shutdown force-systemd-flag-020000: docker exec --privileged -t force-systemd-flag-020000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:59.488298   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:46:59.540322   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:46:59.540377   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:59.540392   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:46:59.540414   56133 retry.go:31] will retry after 294.20643ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:59.836278   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:46:59.890508   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:46:59.890561   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:46:59.890580   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:46:59.890602   56133 retry.go:31] will retry after 493.48467ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:00.385193   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:47:00.438958   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:00.439012   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:00.439025   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:47:00.439046   56133 retry.go:31] will retry after 577.875258ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:01.019307   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:47:01.071935   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:01.071987   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:01.071995   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:47:01.072021   56133 retry.go:31] will retry after 1.353873375s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:02.426380   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:47:02.479534   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:02.479583   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:02.479596   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:47:02.479618   56133 retry.go:31] will retry after 2.091910937s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:04.573732   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:47:04.639955   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:04.640002   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:04.640016   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:47:04.640041   56133 retry.go:31] will retry after 4.754252693s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:09.395053   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:47:09.447329   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:09.447381   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:09.447398   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:47:09.447422   56133 retry.go:31] will retry after 7.413027306s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:16.863082   56133 cli_runner.go:164] Run: docker container inspect force-systemd-flag-020000 --format={{.State.Status}}
	W1002 17:47:16.918114   56133 cli_runner.go:211] docker container inspect force-systemd-flag-020000 --format={{.State.Status}} returned with exit code 1
	I1002 17:47:16.918161   56133 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:47:16.918174   56133 oci.go:661] temporary error: container force-systemd-flag-020000 status is  but expect it to be exited
	I1002 17:47:16.918201   56133 oci.go:88] couldn't shut down force-systemd-flag-020000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	 
	I1002 17:47:16.918286   56133 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-020000
	I1002 17:47:16.969761   56133 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-020000
	W1002 17:47:17.019538   56133 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-020000 returned with exit code 1
	I1002 17:47:17.019667   56133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-020000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:47:17.070898   56133 cli_runner.go:164] Run: docker network rm force-systemd-flag-020000
	I1002 17:47:17.171368   56133 fix.go:114] Sleeping 1 second for extra luck!
	I1002 17:47:18.173588   56133 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:47:18.195704   56133 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:47:18.195892   56133 start.go:159] libmachine.API.Create for "force-systemd-flag-020000" (driver="docker")
	I1002 17:47:18.195917   56133 client.go:168] LocalClient.Create starting
	I1002 17:47:18.196071   56133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:47:18.196136   56133 main.go:141] libmachine: Decoding PEM data...
	I1002 17:47:18.196156   56133 main.go:141] libmachine: Parsing certificate...
	I1002 17:47:18.196218   56133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:47:18.196262   56133 main.go:141] libmachine: Decoding PEM data...
	I1002 17:47:18.196280   56133 main.go:141] libmachine: Parsing certificate...
	I1002 17:47:18.196777   56133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-020000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:47:18.249231   56133 cli_runner.go:211] docker network inspect force-systemd-flag-020000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:47:18.249335   56133 network_create.go:281] running [docker network inspect force-systemd-flag-020000] to gather additional debugging logs...
	I1002 17:47:18.249349   56133 cli_runner.go:164] Run: docker network inspect force-systemd-flag-020000
	W1002 17:47:18.299843   56133 cli_runner.go:211] docker network inspect force-systemd-flag-020000 returned with exit code 1
	I1002 17:47:18.299868   56133 network_create.go:284] error running [docker network inspect force-systemd-flag-020000]: docker network inspect force-systemd-flag-020000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-020000 not found
	I1002 17:47:18.299884   56133 network_create.go:286] output of [docker network inspect force-systemd-flag-020000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-020000 not found
	
	** /stderr **
	I1002 17:47:18.300041   56133 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:47:18.353184   56133 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:18.354575   56133 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:18.356164   56133 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:18.357787   56133 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:47:18.358140   56133 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00101d580}
	I1002 17:47:18.358152   56133 network_create.go:124] attempt to create docker network force-systemd-flag-020000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1002 17:47:18.358223   56133 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-020000 force-systemd-flag-020000
	I1002 17:47:18.445243   56133 network_create.go:108] docker network force-systemd-flag-020000 192.168.85.0/24 created
	I1002 17:47:18.445298   56133 kic.go:117] calculated static IP "192.168.85.2" for the "force-systemd-flag-020000" container
	I1002 17:47:18.445407   56133 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:47:18.499325   56133 cli_runner.go:164] Run: docker volume create force-systemd-flag-020000 --label name.minikube.sigs.k8s.io=force-systemd-flag-020000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:47:18.549277   56133 oci.go:103] Successfully created a docker volume force-systemd-flag-020000
	I1002 17:47:18.549405   56133 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-020000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-020000 --entrypoint /usr/bin/test -v force-systemd-flag-020000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:47:18.876610   56133 oci.go:107] Successfully prepared a docker volume force-systemd-flag-020000
	I1002 17:47:18.876639   56133 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:47:18.876651   56133 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:47:18.876753   56133 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-020000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:53:18.222542   56133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:53:18.222661   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:18.277264   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:18.277373   56133 retry.go:31] will retry after 333.217796ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:18.613002   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:18.666682   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:18.666786   56133 retry.go:31] will retry after 254.899458ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:18.923857   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:18.977886   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:18.978003   56133 retry.go:31] will retry after 710.503322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:19.690907   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:19.744726   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	W1002 17:53:19.744841   56133 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	
	W1002 17:53:19.744860   56133 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:19.744919   56133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:53:19.744988   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:19.796024   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:19.796128   56133 retry.go:31] will retry after 294.982367ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:20.093120   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:20.148854   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:20.148946   56133 retry.go:31] will retry after 327.962508ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:20.477289   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:20.531219   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:20.531318   56133 retry.go:31] will retry after 794.487688ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:21.327275   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:21.379922   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	W1002 17:53:21.380038   56133 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	
	W1002 17:53:21.380065   56133 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:21.380080   56133 start.go:128] duration metric: createHost completed in 6m3.181457133s
	I1002 17:53:21.380141   56133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:53:21.380201   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:21.430166   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:21.430268   56133 retry.go:31] will retry after 190.134575ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:21.622771   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:21.676938   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:21.677030   56133 retry.go:31] will retry after 355.696017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:22.035136   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:22.088535   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:22.088631   56133 retry.go:31] will retry after 609.02318ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:22.700101   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:22.755346   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	W1002 17:53:22.755446   56133 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	
	W1002 17:53:22.755465   56133 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:22.755527   56133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:53:22.755591   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:22.805446   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:22.805539   56133 retry.go:31] will retry after 194.393623ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:23.002329   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:23.057190   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:23.057278   56133 retry.go:31] will retry after 248.057ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:23.307695   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:23.362888   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	I1002 17:53:23.362976   56133 retry.go:31] will retry after 498.302868ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:23.863800   56133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000
	W1002 17:53:23.919508   56133 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000 returned with exit code 1
	W1002 17:53:23.919617   56133 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	
	W1002 17:53:23.919637   56133 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-020000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-020000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	I1002 17:53:23.919649   56133 fix.go:56] fixHost completed within 6m25.756333592s
	I1002 17:53:23.919658   56133 start.go:83] releasing machines lock for "force-systemd-flag-020000", held for 6m25.756394025s
	W1002 17:53:23.919742   56133 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-020000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-020000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1002 17:53:23.963201   56133 out.go:177] 
	W1002 17:53:23.985426   56133 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1002 17:53:23.985479   56133 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1002 17:53:23.985523   56133 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1002 17:53:24.007006   56133 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-020000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
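In the stderr trace above, createHost hits its 360-second budget while the preload tarball is still being extracted into the docker volume (the extraction run starts at 17:47:18 and the host is abandoned at 17:53:21), so the kic container is never created and every lookup of port 22 fails with "No such container". Those lookups follow a retry-with-backoff loop; the Go sketch below is an illustrative stand-in for that pattern — it is not minikube's retry.go — and assumes only that the docker CLI is on PATH.

    // A minimal sketch (not minikube's retry.go) of the backoff loop seen above:
    // keep re-running `docker container inspect` for the host port of 22/tcp
    // until it succeeds or the attempt budget runs out.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func hostPort22(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	delay := 200 * time.Millisecond
    	var lastErr error
    	for attempt := 0; attempt < 8; attempt++ {
    		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		// with no container, docker answers "No such container: <name>" and exits 1
    		lastErr = err
    		time.Sleep(delay)
    		delay *= 2 // grow the wait between attempts, roughly as the retry.go entries above do
    	}
    	return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
    }

    func main() {
    	port, err := hostPort22("force-systemd-flag-020000")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("ssh host port:", port)
    }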
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-020000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-020000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (192.844788ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-020000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
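The follow-up check at docker_test.go:110 asks the (now nonexistent) node for its cgroup driver over minikube ssh; the probe itself is just `docker info --format {{.CgroupDriver}}`. Below is a minimal sketch of running the same query against a local daemon instead, assuming only that the docker CLI is on PATH.

    // A minimal sketch of the probe behind docker_test.go:110, run against a local
    // docker daemon instead of over `minikube ssh`.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		fmt.Println("docker info:", err)
    		return
    	}
    	// with --force-systemd the test expects this to report "systemd" inside the node
    	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
    }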
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-10-02 17:53:24.259267 -0700 PDT m=+6651.970622262
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-020000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-020000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-020000",
	        "Id": "b0284239b5648523be14dc4784d95daaf69c6c6efe6c350ebce96a3fccc690bc",
	        "Created": "2023-10-03T00:47:18.403026615Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-020000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
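The inspect output above shows that only the bridge network survived (subnet 192.168.85.0/24, MTU 65535, carrying the minikube labels) with no containers attached; the trace earlier removes it with `docker network rm`. Below is a minimal sketch of sweeping up such leftover minikube-labelled networks by hand — it assumes only the docker CLI on PATH and is not what `minikube delete` does internally.

    // A minimal sketch of removing networks left behind by a failed profile,
    // keyed off the created_by.minikube.sigs.k8s.io label visible in the inspect
    // output above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "network", "ls",
    		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
    		"--format", "{{.Name}}").Output()
    	if err != nil {
    		fmt.Println("docker network ls:", err)
    		return
    	}
    	for _, name := range strings.Fields(string(out)) {
    		// same effect as the `docker network rm force-systemd-flag-020000` step in the trace
    		if err := exec.Command("docker", "network", "rm", name).Run(); err != nil {
    			fmt.Printf("could not remove %s: %v\n", name, err)
    		}
    	}
    }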
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-020000 -n force-systemd-flag-020000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-020000 -n force-systemd-flag-020000: exit status 7 (94.064685ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:53:24.406985   56584 status.go:249] status error: host: state: unknown state "force-systemd-flag-020000": docker container inspect force-systemd-flag-020000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-020000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-020000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-020000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-020000
--- FAIL: TestForceSystemdFlag (755.58s)

                                                
                                    
TestForceSystemdEnv (756.8s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-153000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1002 17:30:21.520774   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:30:33.552209   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:33:24.709766   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:35:21.662553   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:35:33.695308   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:38:36.765290   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:40:21.683042   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:40:33.714730   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-153000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.720340624s)

                                                
                                                
-- stdout --
	* [force-systemd-env-153000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-153000 in cluster force-systemd-env-153000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-153000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 17:28:44.212642   55759 out.go:296] Setting OutFile to fd 1 ...
	I1002 17:28:44.213254   55759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:28:44.213273   55759 out.go:309] Setting ErrFile to fd 2...
	I1002 17:28:44.213280   55759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:28:44.213883   55759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 17:28:44.215510   55759 out.go:303] Setting JSON to false
	I1002 17:28:44.237851   55759 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25093,"bootTime":1696267831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 17:28:44.237964   55759 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 17:28:44.260113   55759 out.go:177] * [force-systemd-env-153000] minikube v1.31.2 on Darwin 14.0
	I1002 17:28:44.323921   55759 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 17:28:44.302320   55759 notify.go:220] Checking for updates...
	I1002 17:28:44.365965   55759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 17:28:44.386785   55759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 17:28:44.408093   55759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 17:28:44.429099   55759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 17:28:44.449861   55759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 17:28:44.471655   55759 config.go:182] Loaded profile config "offline-docker-088000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 17:28:44.471773   55759 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 17:28:44.529520   55759 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 17:28:44.529676   55759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:28:44.632445   55759 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:160 SystemTime:2023-10-03 00:28:44.621204278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:28:44.674315   55759 out.go:177] * Using the docker driver based on user configuration
	I1002 17:28:44.711518   55759 start.go:298] selected driver: docker
	I1002 17:28:44.711536   55759 start.go:902] validating driver "docker" against <nil>
	I1002 17:28:44.711546   55759 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 17:28:44.714910   55759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:28:44.812233   55759 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:160 SystemTime:2023-10-03 00:28:44.801137395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker
Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:28:44.812460   55759 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 17:28:44.812650   55759 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 17:28:44.834089   55759 out.go:177] * Using Docker Desktop driver with root privileges
	I1002 17:28:44.855762   55759 cni.go:84] Creating CNI manager for ""
	I1002 17:28:44.855789   55759 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 17:28:44.855805   55759 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 17:28:44.855824   55759 start_flags.go:321] config:
	{Name:force-systemd-env-153000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-153000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 17:28:44.899994   55759 out.go:177] * Starting control plane node force-systemd-env-153000 in cluster force-systemd-env-153000
	I1002 17:28:44.920823   55759 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 17:28:44.942993   55759 out.go:177] * Pulling base image ...
	I1002 17:28:44.985863   55759 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:28:44.985943   55759 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 17:28:44.985955   55759 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 17:28:44.985986   55759 cache.go:57] Caching tarball of preloaded images
	I1002 17:28:44.986161   55759 preload.go:174] Found /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 17:28:44.986186   55759 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 17:28:44.986312   55759 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/force-systemd-env-153000/config.json ...
	I1002 17:28:44.986385   55759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/force-systemd-env-153000/config.json: {Name:mk2b1b55a6be03c75294a27364f8bbf1480cb5db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 17:28:45.038036   55759 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 17:28:45.038060   55759 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 17:28:45.038076   55759 cache.go:195] Successfully downloaded all kic artifacts
	I1002 17:28:45.038114   55759 start.go:365] acquiring machines lock for force-systemd-env-153000: {Name:mk455e5f1097dcc9afa1ea92378488f10a6fb589 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:28:45.038249   55759 start.go:369] acquired machines lock for "force-systemd-env-153000" in 122.557µs
	I1002 17:28:45.038274   55759 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-153000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-153000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 17:28:45.038327   55759 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:28:45.059833   55759 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:28:45.060168   55759 start.go:159] libmachine.API.Create for "force-systemd-env-153000" (driver="docker")
	I1002 17:28:45.060214   55759 client.go:168] LocalClient.Create starting
	I1002 17:28:45.060350   55759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:28:45.060426   55759 main.go:141] libmachine: Decoding PEM data...
	I1002 17:28:45.060461   55759 main.go:141] libmachine: Parsing certificate...
	I1002 17:28:45.060570   55759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:28:45.060631   55759 main.go:141] libmachine: Decoding PEM data...
	I1002 17:28:45.060645   55759 main.go:141] libmachine: Parsing certificate...
	I1002 17:28:45.081424   55759 cli_runner.go:164] Run: docker network inspect force-systemd-env-153000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:28:45.133429   55759 cli_runner.go:211] docker network inspect force-systemd-env-153000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:28:45.133553   55759 network_create.go:281] running [docker network inspect force-systemd-env-153000] to gather additional debugging logs...
	I1002 17:28:45.133568   55759 cli_runner.go:164] Run: docker network inspect force-systemd-env-153000
	W1002 17:28:45.184164   55759 cli_runner.go:211] docker network inspect force-systemd-env-153000 returned with exit code 1
	I1002 17:28:45.184193   55759 network_create.go:284] error running [docker network inspect force-systemd-env-153000]: docker network inspect force-systemd-env-153000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-153000 not found
	I1002 17:28:45.184207   55759 network_create.go:286] output of [docker network inspect force-systemd-env-153000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-153000 not found
	
	** /stderr **
	I1002 17:28:45.184353   55759 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:28:45.236378   55759 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:28:45.237889   55759 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:28:45.238275   55759 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0009e5f90}
	I1002 17:28:45.238296   55759 network_create.go:124] attempt to create docker network force-systemd-env-153000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1002 17:28:45.238366   55759 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-153000 force-systemd-env-153000
	W1002 17:28:45.289566   55759 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-153000 force-systemd-env-153000 returned with exit code 1
	W1002 17:28:45.289602   55759 network_create.go:149] failed to create docker network force-systemd-env-153000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-153000 force-systemd-env-153000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1002 17:28:45.289625   55759 network_create.go:116] failed to create docker network force-systemd-env-153000 192.168.67.0/24, will retry: subnet is taken
	I1002 17:28:45.290992   55759 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:28:45.291377   55759 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f8fdb0}
	I1002 17:28:45.291391   55759 network_create.go:124] attempt to create docker network force-systemd-env-153000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1002 17:28:45.291463   55759 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-153000 force-systemd-env-153000
	I1002 17:28:45.378714   55759 network_create.go:108] docker network force-systemd-env-153000 192.168.76.0/24 created
	I1002 17:28:45.378760   55759 kic.go:117] calculated static IP "192.168.76.2" for the "force-systemd-env-153000" container
	I1002 17:28:45.378879   55759 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:28:45.430884   55759 cli_runner.go:164] Run: docker volume create force-systemd-env-153000 --label name.minikube.sigs.k8s.io=force-systemd-env-153000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:28:45.482572   55759 oci.go:103] Successfully created a docker volume force-systemd-env-153000
	I1002 17:28:45.482686   55759 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-153000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-153000 --entrypoint /usr/bin/test -v force-systemd-env-153000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:28:45.876421   55759 oci.go:107] Successfully prepared a docker volume force-systemd-env-153000
	I1002 17:28:45.876460   55759 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:28:45.876474   55759 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:28:45.876567   55759 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-153000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:34:45.204295   55759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:34:45.204429   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:45.258441   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:34:45.258574   55759 retry.go:31] will retry after 330.976252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:45.590294   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:45.645549   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:34:45.645657   55759 retry.go:31] will retry after 315.047795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:45.962504   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:46.016440   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:34:46.016537   55759 retry.go:31] will retry after 698.741645ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:46.715658   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:46.771938   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	W1002 17:34:46.772044   55759 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	
	W1002 17:34:46.772063   55759 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:46.772129   55759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:34:46.772201   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:46.822128   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:34:46.822221   55759 retry.go:31] will retry after 243.189808ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:47.067818   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:47.120395   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:34:47.120487   55759 retry.go:31] will retry after 289.848748ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:47.412021   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:47.466489   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:34:47.466577   55759 retry.go:31] will retry after 794.330733ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:48.263383   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:34:48.317871   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	W1002 17:34:48.317961   55759 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	
	W1002 17:34:48.317980   55759 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:48.317992   55759 start.go:128] duration metric: createHost completed in 6m3.136705262s
	I1002 17:34:48.318002   55759 start.go:83] releasing machines lock for "force-systemd-env-153000", held for 6m3.136798063s
	W1002 17:34:48.318015   55759 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1002 17:34:48.318479   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:48.370264   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:48.370316   55759 delete.go:82] Unable to get host status for force-systemd-env-153000, assuming it has already been deleted: state: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	W1002 17:34:48.370393   55759 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1002 17:34:48.370404   55759 start.go:703] Will try again in 5 seconds ...
	I1002 17:34:53.371762   55759 start.go:365] acquiring machines lock for force-systemd-env-153000: {Name:mk455e5f1097dcc9afa1ea92378488f10a6fb589 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:34:53.371944   55759 start.go:369] acquired machines lock for "force-systemd-env-153000" in 138.96µs
	I1002 17:34:53.371986   55759 start.go:96] Skipping create...Using existing machine configuration
	I1002 17:34:53.372001   55759 fix.go:54] fixHost starting: 
	I1002 17:34:53.372437   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:53.423876   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:53.423920   55759 fix.go:102] recreateIfNeeded on force-systemd-env-153000: state= err=unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:53.423943   55759 fix.go:107] machineExists: false. err=machine does not exist
	I1002 17:34:53.445607   55759 out.go:177] * docker "force-systemd-env-153000" container is missing, will recreate.
	I1002 17:34:53.489134   55759 delete.go:124] DEMOLISHING force-systemd-env-153000 ...
	I1002 17:34:53.489325   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:53.540907   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	W1002 17:34:53.540972   55759 stop.go:75] unable to get state: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:53.540991   55759 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:53.541351   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:53.590974   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:53.591040   55759 delete.go:82] Unable to get host status for force-systemd-env-153000, assuming it has already been deleted: state: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:53.591131   55759 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-153000
	W1002 17:34:53.641228   55759 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-153000 returned with exit code 1
	I1002 17:34:53.641264   55759 kic.go:367] could not find the container force-systemd-env-153000 to remove it. will try anyways
	I1002 17:34:53.641361   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:53.691673   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	W1002 17:34:53.691724   55759 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:53.691818   55759 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-153000 /bin/bash -c "sudo init 0"
	W1002 17:34:53.742457   55759 cli_runner.go:211] docker exec --privileged -t force-systemd-env-153000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 17:34:53.742490   55759 oci.go:647] error shutdown force-systemd-env-153000: docker exec --privileged -t force-systemd-env-153000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:54.743854   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:54.795780   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:54.795835   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:54.795848   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:34:54.795870   55759 retry.go:31] will retry after 393.341327ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:55.190928   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:55.246126   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:55.246180   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:55.246197   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:34:55.246231   55759 retry.go:31] will retry after 924.319959ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:56.172229   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:56.225010   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:56.225067   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:56.225075   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:34:56.225101   55759 retry.go:31] will retry after 938.282209ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:57.163771   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:57.214786   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:57.214831   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:57.214849   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:34:57.214888   55759 retry.go:31] will retry after 1.137495302s: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:58.353928   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:58.407061   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:58.407109   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:58.407121   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:34:58.407144   55759 retry.go:31] will retry after 1.449769647s: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:59.857442   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:34:59.914604   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:34:59.914661   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:34:59.914680   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:34:59.914710   55759 retry.go:31] will retry after 3.019699152s: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:35:02.937186   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:35:02.991207   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:35:02.991255   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:35:02.991269   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:35:02.991291   55759 retry.go:31] will retry after 3.090094413s: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:35:06.083344   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:35:06.137267   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:35:06.137318   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:35:06.137327   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:35:06.137350   55759 retry.go:31] will retry after 6.753603117s: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:35:12.892927   55759 cli_runner.go:164] Run: docker container inspect force-systemd-env-153000 --format={{.State.Status}}
	W1002 17:35:12.947839   55759 cli_runner.go:211] docker container inspect force-systemd-env-153000 --format={{.State.Status}} returned with exit code 1
	I1002 17:35:12.947887   55759 oci.go:659] temporary error verifying shutdown: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:35:12.947900   55759 oci.go:661] temporary error: container force-systemd-env-153000 status is  but expect it to be exited
	I1002 17:35:12.947927   55759 oci.go:88] couldn't shut down force-systemd-env-153000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	 
	I1002 17:35:12.948001   55759 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-153000
	I1002 17:35:12.998874   55759 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-153000
	W1002 17:35:13.049276   55759 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-153000 returned with exit code 1
	I1002 17:35:13.049392   55759 cli_runner.go:164] Run: docker network inspect force-systemd-env-153000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:35:13.100620   55759 cli_runner.go:164] Run: docker network rm force-systemd-env-153000
	I1002 17:35:13.198545   55759 fix.go:114] Sleeping 1 second for extra luck!
	I1002 17:35:14.199706   55759 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:35:14.222851   55759 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1002 17:35:14.223054   55759 start.go:159] libmachine.API.Create for "force-systemd-env-153000" (driver="docker")
	I1002 17:35:14.223088   55759 client.go:168] LocalClient.Create starting
	I1002 17:35:14.223351   55759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:35:14.223489   55759 main.go:141] libmachine: Decoding PEM data...
	I1002 17:35:14.223524   55759 main.go:141] libmachine: Parsing certificate...
	I1002 17:35:14.223629   55759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:35:14.223717   55759 main.go:141] libmachine: Decoding PEM data...
	I1002 17:35:14.223736   55759 main.go:141] libmachine: Parsing certificate...
	I1002 17:35:14.244918   55759 cli_runner.go:164] Run: docker network inspect force-systemd-env-153000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:35:14.297880   55759 cli_runner.go:211] docker network inspect force-systemd-env-153000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:35:14.297983   55759 network_create.go:281] running [docker network inspect force-systemd-env-153000] to gather additional debugging logs...
	I1002 17:35:14.297999   55759 cli_runner.go:164] Run: docker network inspect force-systemd-env-153000
	W1002 17:35:14.347464   55759 cli_runner.go:211] docker network inspect force-systemd-env-153000 returned with exit code 1
	I1002 17:35:14.347494   55759 network_create.go:284] error running [docker network inspect force-systemd-env-153000]: docker network inspect force-systemd-env-153000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-153000 not found
	I1002 17:35:14.347510   55759 network_create.go:286] output of [docker network inspect force-systemd-env-153000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-153000 not found
	
	** /stderr **
	I1002 17:35:14.347662   55759 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:35:14.399435   55759 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:35:14.401046   55759 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:35:14.402477   55759 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:35:14.404044   55759 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:35:14.405394   55759 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:35:14.405811   55759 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fce660}
	I1002 17:35:14.405825   55759 network_create.go:124] attempt to create docker network force-systemd-env-153000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1002 17:35:14.405895   55759 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-153000 force-systemd-env-153000
	I1002 17:35:14.492836   55759 network_create.go:108] docker network force-systemd-env-153000 192.168.94.0/24 created
	I1002 17:35:14.492871   55759 kic.go:117] calculated static IP "192.168.94.2" for the "force-systemd-env-153000" container
	I1002 17:35:14.492976   55759 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:35:14.545237   55759 cli_runner.go:164] Run: docker volume create force-systemd-env-153000 --label name.minikube.sigs.k8s.io=force-systemd-env-153000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:35:14.595374   55759 oci.go:103] Successfully created a docker volume force-systemd-env-153000
	I1002 17:35:14.595495   55759 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-153000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-153000 --entrypoint /usr/bin/test -v force-systemd-env-153000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:35:14.917591   55759 oci.go:107] Successfully prepared a docker volume force-systemd-env-153000
	I1002 17:35:14.917621   55759 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:35:14.917633   55759 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:35:14.917736   55759 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-153000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:41:14.250168   55759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:41:14.250299   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:14.303645   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:14.303766   55759 retry.go:31] will retry after 127.697117ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:14.432146   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:14.485844   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:14.485957   55759 retry.go:31] will retry after 443.063746ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:14.931427   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:14.986023   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:14.986123   55759 retry.go:31] will retry after 631.531924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:15.618326   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:15.673828   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	W1002 17:41:15.673945   55759 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	
	W1002 17:41:15.673969   55759 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:15.674029   55759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:41:15.674092   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:15.724760   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:15.724849   55759 retry.go:31] will retry after 301.217633ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:16.028480   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:16.081286   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:16.081376   55759 retry.go:31] will retry after 388.401456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:16.470100   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:16.525415   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:16.525527   55759 retry.go:31] will retry after 287.663337ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:16.815533   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:16.869353   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	W1002 17:41:16.869456   55759 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	
	W1002 17:41:16.869480   55759 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:16.869489   55759 start.go:128] duration metric: createHost completed in 6m2.644977171s
	I1002 17:41:16.869559   55759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:41:16.869613   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:16.921057   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:16.921150   55759 retry.go:31] will retry after 271.031456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:17.193019   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:17.245796   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:17.245888   55759 retry.go:31] will retry after 260.753126ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:17.509145   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:17.563773   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:17.563863   55759 retry.go:31] will retry after 626.563117ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:18.192934   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:18.245917   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	W1002 17:41:18.246039   55759 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	
	W1002 17:41:18.246063   55759 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:18.246128   55759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:41:18.246188   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:18.297020   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:18.297112   55759 retry.go:31] will retry after 350.037954ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:18.649597   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:18.703608   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:18.703746   55759 retry.go:31] will retry after 484.910654ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:19.191155   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:19.243903   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	I1002 17:41:19.243994   55759 retry.go:31] will retry after 607.877843ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:19.854283   55759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000
	W1002 17:41:19.908403   55759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000 returned with exit code 1
	W1002 17:41:19.908517   55759 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	
	W1002 17:41:19.908536   55759 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-153000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-153000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	I1002 17:41:19.908553   55759 fix.go:56] fixHost completed within 6m26.509951705s
	I1002 17:41:19.908561   55759 start.go:83] releasing machines lock for "force-systemd-env-153000", held for 6m26.510002704s
	W1002 17:41:19.908643   55759 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-153000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-153000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1002 17:41:19.951981   55759 out.go:177] 
	W1002 17:41:19.974280   55759 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1002 17:41:19.974351   55759 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1002 17:41:19.974423   55759 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1002 17:41:20.017029   55759 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-153000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-153000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-153000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (193.063971ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-153000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-10-02 17:41:20.267069 -0700 PDT m=+5928.028196142
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-153000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-153000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-153000",
	        "Id": "0a3722a6fa205be867b23e3e0eaa6a15569a265b4eba511bbca42ae8ab1452ce",
	        "Created": "2023-10-03T00:35:14.451504534Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-153000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-153000 -n force-systemd-env-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-153000 -n force-systemd-env-153000: exit status 7 (93.69234ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:41:20.414108   56224 status.go:249] status error: host: state: unknown state "force-systemd-env-153000": docker container inspect force-systemd-env-153000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-153000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-153000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-153000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-153000
--- FAIL: TestForceSystemdEnv (756.80s)
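The exit path here is DRV_CREATE_TIMEOUT: the preload-extraction step above (the docker run ... -I lz4 -xf /preloaded.tar at 17:35:14) appears to have stalled long enough that createHost hit the 360-second limit before the force-systemd-env-153000 container was ever created, leaving only the network and volume behind (see the post-mortem docker inspect below). For local triage, the log's own suggestion can be replayed with the same commands the test used (a sketch; the profile name, flags, and binary path are taken verbatim from the log above and assume the same out/ build):

	# clean up the leftover profile, network and volume from the timed-out create (suggested by the log)
	out/minikube-darwin-amd64 delete -p force-systemd-env-153000
	# re-run the exact start invocation from docker_test.go:157
	out/minikube-darwin-amd64 start -p force-systemd-env-153000 --memory=2048 --alsologtostderr -v=5 --driver=docker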

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (262.65s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-450000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1002 16:13:05.327493   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:15:21.485524   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:15:33.517457   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:33.523999   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:33.536215   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:33.558443   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:33.598923   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:33.681014   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:33.841146   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:34.161904   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:34.802227   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:36.082665   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:38.644993   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:43.765323   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:15:49.173383   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:15:54.006125   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:16:14.488135   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:16:55.450301   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-450000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m22.607242678s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-450000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-450000 in cluster ingress-addon-legacy-450000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:12:54.420785   51464 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:12:54.421058   51464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:12:54.421064   51464 out.go:309] Setting ErrFile to fd 2...
	I1002 16:12:54.421068   51464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:12:54.421242   51464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:12:54.422684   51464 out.go:303] Setting JSON to false
	I1002 16:12:54.444273   51464 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":20543,"bootTime":1696267831,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 16:12:54.444375   51464 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 16:12:54.466779   51464 out.go:177] * [ingress-addon-legacy-450000] minikube v1.31.2 on Darwin 14.0
	I1002 16:12:54.532494   51464 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 16:12:54.510564   51464 notify.go:220] Checking for updates...
	I1002 16:12:54.575469   51464 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 16:12:54.597534   51464 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 16:12:54.618720   51464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 16:12:54.639512   51464 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 16:12:54.661664   51464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 16:12:54.683881   51464 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 16:12:54.741566   51464 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 16:12:54.741695   51464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:12:54.844805   51464 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-02 23:12:54.833980412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:12:54.866958   51464 out.go:177] * Using the docker driver based on user configuration
	I1002 16:12:54.909641   51464 start.go:298] selected driver: docker
	I1002 16:12:54.909693   51464 start.go:902] validating driver "docker" against <nil>
	I1002 16:12:54.909708   51464 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 16:12:54.914057   51464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:12:55.017338   51464 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-02 23:12:55.006453926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:12:55.017506   51464 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 16:12:55.017687   51464 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 16:12:55.038919   51464 out.go:177] * Using Docker Desktop driver with root privileges
	I1002 16:12:55.060134   51464 cni.go:84] Creating CNI manager for ""
	I1002 16:12:55.060170   51464 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 16:12:55.060183   51464 start_flags.go:321] config:
	{Name:ingress-addon-legacy-450000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-450000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:12:55.103095   51464 out.go:177] * Starting control plane node ingress-addon-legacy-450000 in cluster ingress-addon-legacy-450000
	I1002 16:12:55.125147   51464 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 16:12:55.146911   51464 out.go:177] * Pulling base image ...
	I1002 16:12:55.189153   51464 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 16:12:55.189216   51464 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 16:12:55.244389   51464 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 16:12:55.244419   51464 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 16:12:55.250836   51464 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1002 16:12:55.250847   51464 cache.go:57] Caching tarball of preloaded images
	I1002 16:12:55.251064   51464 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 16:12:55.271805   51464 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1002 16:12:55.315196   51464 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1002 16:12:55.396709   51464 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1002 16:13:00.375444   51464 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1002 16:13:00.375601   51464 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1002 16:13:01.008578   51464 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1002 16:13:01.008816   51464 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/config.json ...
	I1002 16:13:01.008839   51464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/config.json: {Name:mka070dd9cd0d1b02b2a1c927c1ef5681e7bd6c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:13:01.009194   51464 cache.go:195] Successfully downloaded all kic artifacts
	I1002 16:13:01.009223   51464 start.go:365] acquiring machines lock for ingress-addon-legacy-450000: {Name:mkb89043d9f4f044c90ab8779466da7dd05e9f48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 16:13:01.009309   51464 start.go:369] acquired machines lock for "ingress-addon-legacy-450000" in 78.284µs
	I1002 16:13:01.009330   51464 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-450000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-450000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 16:13:01.009379   51464 start.go:125] createHost starting for "" (driver="docker")
	I1002 16:13:01.030247   51464 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 16:13:01.030654   51464 start.go:159] libmachine.API.Create for "ingress-addon-legacy-450000" (driver="docker")
	I1002 16:13:01.030725   51464 client.go:168] LocalClient.Create starting
	I1002 16:13:01.030913   51464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 16:13:01.030991   51464 main.go:141] libmachine: Decoding PEM data...
	I1002 16:13:01.031022   51464 main.go:141] libmachine: Parsing certificate...
	I1002 16:13:01.031110   51464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 16:13:01.031170   51464 main.go:141] libmachine: Decoding PEM data...
	I1002 16:13:01.031193   51464 main.go:141] libmachine: Parsing certificate...
	I1002 16:13:01.053878   51464 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-450000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 16:13:01.107615   51464 cli_runner.go:211] docker network inspect ingress-addon-legacy-450000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 16:13:01.107739   51464 network_create.go:281] running [docker network inspect ingress-addon-legacy-450000] to gather additional debugging logs...
	I1002 16:13:01.107759   51464 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-450000
	W1002 16:13:01.158071   51464 cli_runner.go:211] docker network inspect ingress-addon-legacy-450000 returned with exit code 1
	I1002 16:13:01.158098   51464 network_create.go:284] error running [docker network inspect ingress-addon-legacy-450000]: docker network inspect ingress-addon-legacy-450000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-450000 not found
	I1002 16:13:01.158122   51464 network_create.go:286] output of [docker network inspect ingress-addon-legacy-450000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-450000 not found
	
	** /stderr **
	I1002 16:13:01.158268   51464 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 16:13:01.210639   51464 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000576b30}
	I1002 16:13:01.210676   51464 network_create.go:124] attempt to create docker network ingress-addon-legacy-450000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1002 16:13:01.210747   51464 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-450000 ingress-addon-legacy-450000
	I1002 16:13:01.298815   51464 network_create.go:108] docker network ingress-addon-legacy-450000 192.168.49.0/24 created
	I1002 16:13:01.298857   51464 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-450000" container
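	The network step above (free-subnet detection, then a bridge network with an explicit gateway and MTU) can be reproduced or checked by hand with the docker CLI. A minimal sketch using the same name and CIDR as this run:
	  # create the bridge network with the values chosen above
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o com.docker.network.driver.mtu=65535 \
	    ingress-addon-legacy-450000
	  # confirm the subnet/gateway that were actually assigned
	  docker network inspect ingress-addon-legacy-450000 \
	    --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'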
	I1002 16:13:01.298977   51464 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 16:13:01.349902   51464 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-450000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-450000 --label created_by.minikube.sigs.k8s.io=true
	I1002 16:13:01.403385   51464 oci.go:103] Successfully created a docker volume ingress-addon-legacy-450000
	I1002 16:13:01.403499   51464 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-450000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-450000 --entrypoint /usr/bin/test -v ingress-addon-legacy-450000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 16:13:01.831444   51464 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-450000
	I1002 16:13:01.831520   51464 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 16:13:01.831535   51464 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 16:13:01.831661   51464 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-450000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 16:13:04.620260   51464 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-450000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (2.788458697s)
	I1002 16:13:04.620287   51464 kic.go:199] duration metric: took 2.788693 seconds to extract preloaded images to volume
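	The preload step mounts the local lz4 tarball and the freshly created volume into a throwaway container whose entrypoint is tar. A hand-run equivalent of the command above, with PRELOAD and KICBASE as placeholders for the cached tarball path and the base image used in this run:
	  # extract the preload tarball into the named volume via a short-lived container
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD":/preloaded.tar:ro \
	    -v ingress-addon-legacy-450000:/extractDir \
	    "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir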
	I1002 16:13:04.620387   51464 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 16:13:04.725924   51464 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-450000 --name ingress-addon-legacy-450000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-450000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-450000 --network ingress-addon-legacy-450000 --ip 192.168.49.2 --volume ingress-addon-legacy-450000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 16:13:05.014085   51464 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Running}}
	I1002 16:13:05.072908   51464 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Status}}
	I1002 16:13:05.134165   51464 cli_runner.go:164] Run: docker exec ingress-addon-legacy-450000 stat /var/lib/dpkg/alternatives/iptables
	I1002 16:13:05.254254   51464 oci.go:144] the created container "ingress-addon-legacy-450000" has a running status.
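	The container publishes 22, 2376, 5000, 8443 and 32443 to ephemeral host ports on 127.0.0.1 (the --publish=127.0.0.1::PORT form), so the SSH port that later appears as 57074 is random per run. It can be recovered with either of:
	  docker container inspect ingress-addon-legacy-450000 \
	    --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	  docker port ingress-addon-legacy-450000 22/tcp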
	I1002 16:13:05.254287   51464 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa...
	I1002 16:13:05.335304   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 16:13:05.335368   51464 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 16:13:05.408316   51464 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Status}}
	I1002 16:13:05.470686   51464 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 16:13:05.470712   51464 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-450000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 16:13:05.585922   51464 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Status}}
	I1002 16:13:05.644256   51464 machine.go:88] provisioning docker machine ...
	I1002 16:13:05.644300   51464 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-450000"
	I1002 16:13:05.644426   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:05.702429   51464 main.go:141] libmachine: Using SSH client type: native
	I1002 16:13:05.702791   51464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 57074 <nil> <nil>}
	I1002 16:13:05.702811   51464 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-450000 && echo "ingress-addon-legacy-450000" | sudo tee /etc/hostname
	I1002 16:13:05.846739   51464 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-450000
	
	I1002 16:13:05.846839   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:05.901358   51464 main.go:141] libmachine: Using SSH client type: native
	I1002 16:13:05.901658   51464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 57074 <nil> <nil>}
	I1002 16:13:05.901679   51464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-450000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-450000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-450000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 16:13:06.034331   51464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 16:13:06.034360   51464 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17323-48076/.minikube CaCertPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17323-48076/.minikube}
	I1002 16:13:06.034383   51464 ubuntu.go:177] setting up certificates
	I1002 16:13:06.034389   51464 provision.go:83] configureAuth start
	I1002 16:13:06.034468   51464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-450000
	I1002 16:13:06.086559   51464 provision.go:138] copyHostCerts
	I1002 16:13:06.086599   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17323-48076/.minikube/key.pem
	I1002 16:13:06.086668   51464 exec_runner.go:144] found /Users/jenkins/minikube-integration/17323-48076/.minikube/key.pem, removing ...
	I1002 16:13:06.086677   51464 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17323-48076/.minikube/key.pem
	I1002 16:13:06.086797   51464 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17323-48076/.minikube/key.pem (1675 bytes)
	I1002 16:13:06.086996   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.pem
	I1002 16:13:06.087023   51464 exec_runner.go:144] found /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.pem, removing ...
	I1002 16:13:06.087028   51464 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.pem
	I1002 16:13:06.087114   51464 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.pem (1078 bytes)
	I1002 16:13:06.087265   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17323-48076/.minikube/cert.pem
	I1002 16:13:06.087291   51464 exec_runner.go:144] found /Users/jenkins/minikube-integration/17323-48076/.minikube/cert.pem, removing ...
	I1002 16:13:06.087297   51464 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17323-48076/.minikube/cert.pem
	I1002 16:13:06.087369   51464 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17323-48076/.minikube/cert.pem (1123 bytes)
	I1002 16:13:06.087505   51464 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-450000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-450000]
	I1002 16:13:06.278373   51464 provision.go:172] copyRemoteCerts
	I1002 16:13:06.278454   51464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 16:13:06.278526   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:06.330434   51464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:13:06.423998   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 16:13:06.424080   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 16:13:06.447063   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 16:13:06.447163   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1002 16:13:06.469825   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 16:13:06.469897   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 16:13:06.493571   51464 provision.go:86] duration metric: configureAuth took 459.159995ms
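	configureAuth copies the CA, server certificate and server key into /etc/docker on the node over the SSH port found earlier. A manual sketch of the same transfer (PORT and ID_RSA stand for the mapped 22/tcp port and the machine key; this is an equivalent by hand, not literally what minikube's ssh_runner does):
	  scp -P "$PORT" -i "$ID_RSA" ca.pem server.pem server-key.pem docker@127.0.0.1:/tmp/
	  ssh -p "$PORT" -i "$ID_RSA" docker@127.0.0.1 \
	    'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'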
	I1002 16:13:06.493587   51464 ubuntu.go:193] setting minikube options for container-runtime
	I1002 16:13:06.493738   51464 config.go:182] Loaded profile config "ingress-addon-legacy-450000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 16:13:06.493811   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:06.546876   51464 main.go:141] libmachine: Using SSH client type: native
	I1002 16:13:06.547197   51464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 57074 <nil> <nil>}
	I1002 16:13:06.547215   51464 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 16:13:06.678967   51464 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 16:13:06.678985   51464 ubuntu.go:71] root file system type: overlay
	I1002 16:13:06.679084   51464 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 16:13:06.679187   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:06.731891   51464 main.go:141] libmachine: Using SSH client type: native
	I1002 16:13:06.732664   51464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 57074 <nil> <nil>}
	I1002 16:13:06.732719   51464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 16:13:06.875557   51464 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 16:13:06.875665   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:06.930412   51464 main.go:141] libmachine: Using SSH client type: native
	I1002 16:13:06.930719   51464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil>  [] 0s} 127.0.0.1 57074 <nil> <nil>}
	I1002 16:13:06.930732   51464 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 16:13:07.536555   51464 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-02 23:13:06.872206857 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1002 16:13:07.536584   51464 machine.go:91] provisioned docker machine in 1.892270805s
	I1002 16:13:07.536612   51464 client.go:171] LocalClient.Create took 6.505733009s
	I1002 16:13:07.536636   51464 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-450000" took 6.505859209s
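	The unit written above follows the override pattern the embedded comments describe: the empty ExecStart= clears the inherited command before the TLS-enabled tcp://0.0.0.0:2376 command is set, and the diff-and-replace step only swaps the file in when it differs. After such a replacement the usual reload sequence applies, and the effective command line can be checked; a short sketch:
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	  systemctl cat docker.service | grep '^ExecStart='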
	I1002 16:13:07.536646   51464 start.go:300] post-start starting for "ingress-addon-legacy-450000" (driver="docker")
	I1002 16:13:07.536661   51464 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 16:13:07.536739   51464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 16:13:07.536855   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:07.594394   51464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:13:07.689646   51464 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 16:13:07.693826   51464 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 16:13:07.693848   51464 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 16:13:07.693858   51464 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 16:13:07.693862   51464 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 16:13:07.693872   51464 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17323-48076/.minikube/addons for local assets ...
	I1002 16:13:07.693982   51464 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17323-48076/.minikube/files for local assets ...
	I1002 16:13:07.694152   51464 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/ssl/certs/485562.pem -> 485562.pem in /etc/ssl/certs
	I1002 16:13:07.694161   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/ssl/certs/485562.pem -> /etc/ssl/certs/485562.pem
	I1002 16:13:07.694339   51464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 16:13:07.703565   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/ssl/certs/485562.pem --> /etc/ssl/certs/485562.pem (1708 bytes)
	I1002 16:13:07.725848   51464 start.go:303] post-start completed in 189.189756ms
	I1002 16:13:07.726387   51464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-450000
	I1002 16:13:07.779911   51464 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/config.json ...
	I1002 16:13:07.780353   51464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 16:13:07.780411   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:07.834211   51464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:13:07.926240   51464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 16:13:07.931383   51464 start.go:128] duration metric: createHost completed in 6.921858936s
	I1002 16:13:07.931400   51464 start.go:83] releasing machines lock for "ingress-addon-legacy-450000", held for 6.921951521s
	I1002 16:13:07.931487   51464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-450000
	I1002 16:13:07.982462   51464 ssh_runner.go:195] Run: cat /version.json
	I1002 16:13:07.982490   51464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 16:13:07.982556   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:07.982564   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:08.041100   51464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:13:08.041085   51464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:13:08.236948   51464 ssh_runner.go:195] Run: systemctl --version
	I1002 16:13:08.242023   51464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 16:13:08.247480   51464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 16:13:08.271681   51464 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 16:13:08.271748   51464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1002 16:13:08.288648   51464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1002 16:13:08.305232   51464 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 16:13:08.305263   51464 start.go:469] detecting cgroup driver to use...
	I1002 16:13:08.305285   51464 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 16:13:08.305404   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 16:13:08.322093   51464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1002 16:13:08.332705   51464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 16:13:08.343522   51464 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 16:13:08.343587   51464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 16:13:08.354012   51464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 16:13:08.364601   51464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 16:13:08.375282   51464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 16:13:08.385965   51464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 16:13:08.395950   51464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 16:13:08.406772   51464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 16:13:08.416238   51464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 16:13:08.425443   51464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 16:13:08.484450   51464 ssh_runner.go:195] Run: sudo systemctl restart containerd
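	The containerd side of the "cgroupfs" decision reduces to a single key in /etc/containerd/config.toml; the sed commands above flip it and the service is reloaded. A condensed equivalent of that edit:
	  sudo sed -i 's/SystemdCgroup = true/SystemdCgroup = false/' /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd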
	I1002 16:13:08.579737   51464 start.go:469] detecting cgroup driver to use...
	I1002 16:13:08.579760   51464 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 16:13:08.579838   51464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 16:13:08.597785   51464 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1002 16:13:08.597864   51464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 16:13:08.611455   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 16:13:08.630586   51464 ssh_runner.go:195] Run: which cri-dockerd
	I1002 16:13:08.635677   51464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 16:13:08.647244   51464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 16:13:08.668043   51464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 16:13:08.760372   51464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 16:13:08.855589   51464 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 16:13:08.855684   51464 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 16:13:08.875103   51464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 16:13:08.957822   51464 ssh_runner.go:195] Run: sudo systemctl restart docker
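	The 130-byte /etc/docker/daemon.json written from memory above is not printed in the log; a file with the same effect (selecting the cgroupfs driver) typically looks like the following. The contents here are an assumption for illustration, not a verbatim copy of what was written:
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  }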
	I1002 16:13:09.218977   51464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 16:13:09.246549   51464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 16:13:09.318423   51464 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1002 16:13:09.318582   51464 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-450000 dig +short host.docker.internal
	I1002 16:13:09.448286   51464 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1002 16:13:09.448383   51464 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1002 16:13:09.453414   51464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
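	The host-side address for host.minikube.internal is discovered by digging host.docker.internal from inside the node and is then written into the node's /etc/hosts; the same grep-then-rewrite idiom is reused later for control-plane.minikube.internal. A simplified sketch of the idea:
	  IP=$(docker exec ingress-addon-legacy-450000 dig +short host.docker.internal)
	  docker exec ingress-addon-legacy-450000 sh -c \
	    "grep -q host.minikube.internal /etc/hosts || echo '$IP host.minikube.internal' >> /etc/hosts"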
	I1002 16:13:09.465483   51464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:13:09.518747   51464 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 16:13:09.518816   51464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 16:13:09.540132   51464 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1002 16:13:09.540154   51464 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1002 16:13:09.540228   51464 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 16:13:09.549717   51464 ssh_runner.go:195] Run: which lz4
	I1002 16:13:09.554138   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1002 16:13:09.554271   51464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 16:13:09.558809   51464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 16:13:09.558833   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1002 16:13:15.355768   51464 docker.go:628] Took 5.801399 seconds to copy over tarball
	I1002 16:13:15.355848   51464 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 16:13:17.342341   51464 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.986365438s)
	I1002 16:13:17.342378   51464 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 16:13:17.402784   51464 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 16:13:17.413430   51464 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1002 16:13:17.431601   51464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 16:13:17.491162   51464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 16:13:18.532407   51464 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.041182394s)
	I1002 16:13:18.532600   51464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 16:13:18.555390   51464 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1002 16:13:18.555410   51464 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1002 16:13:18.555422   51464 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 16:13:18.563135   51464 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 16:13:18.563146   51464 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1002 16:13:18.563427   51464 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 16:13:18.563826   51464 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 16:13:18.564368   51464 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 16:13:18.566629   51464 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 16:13:18.566699   51464 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 16:13:18.567618   51464 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 16:13:18.571224   51464 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 16:13:18.571321   51464 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 16:13:18.572830   51464 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1002 16:13:18.573169   51464 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 16:13:18.573414   51464 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 16:13:18.575611   51464 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 16:13:18.575889   51464 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 16:13:18.576105   51464 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 16:13:19.337831   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1002 16:13:19.359403   51464 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1002 16:13:19.359436   51464 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1002 16:13:19.359494   51464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1002 16:13:19.382416   51464 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1002 16:13:19.782691   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1002 16:13:19.805551   51464 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1002 16:13:19.805589   51464 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
	I1002 16:13:19.805655   51464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1002 16:13:19.806225   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 16:13:19.829024   51464 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1002 16:13:20.059397   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1002 16:13:20.084152   51464 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1002 16:13:20.084183   51464 docker.go:317] Removing image: registry.k8s.io/pause:3.2
	I1002 16:13:20.084253   51464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1002 16:13:20.106575   51464 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1002 16:13:20.412431   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1002 16:13:20.434425   51464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1002 16:13:20.434451   51464 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 16:13:20.434511   51464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1002 16:13:20.456218   51464 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1002 16:13:20.778391   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1002 16:13:20.799658   51464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1002 16:13:20.799693   51464 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 16:13:20.799750   51464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1002 16:13:20.820238   51464 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1002 16:13:21.069089   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 16:13:21.091611   51464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1002 16:13:21.091664   51464 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 16:13:21.091734   51464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 16:13:21.113659   51464 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1002 16:13:21.429460   51464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1002 16:13:21.451195   51464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1002 16:13:21.451221   51464 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 16:13:21.451287   51464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1002 16:13:21.471455   51464 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1002 16:13:21.471505   51464 cache_images.go:92] LoadImages completed in 2.915961882s
	W1002 16:13:21.471559   51464 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
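	The "needs transfer" decisions above come from comparing the image ID present in the runtime with the ID expected for each registry.k8s.io name; the preload only carries the older k8s.gcr.io tags, so every registry.k8s.io lookup misses. A manual version of that check, with IMAGE and EXPECTED_ID as placeholders:
	  LOCAL_ID=$(docker image inspect --format '{{.Id}}' "$IMAGE" 2>/dev/null || true)
	  [ "$LOCAL_ID" = "$EXPECTED_ID" ] || echo "$IMAGE needs transfer"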
	I1002 16:13:21.471633   51464 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 16:13:21.526322   51464 cni.go:84] Creating CNI manager for ""
	I1002 16:13:21.526340   51464 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 16:13:21.526355   51464 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 16:13:21.526378   51464 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-450000 NodeName:ingress-addon-legacy-450000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 16:13:21.526497   51464 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-450000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 16:13:21.526567   51464 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-450000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-450000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 16:13:21.526652   51464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1002 16:13:21.536524   51464 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 16:13:21.536584   51464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 16:13:21.545881   51464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 16:13:21.564067   51464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1002 16:13:21.582141   51464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
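	The generated kubeadm YAML is staged as /var/tmp/minikube/kubeadm.yaml.new; in the later bootstrap phase (not part of this excerpt) it is consumed by kubeadm init. A hedged sketch of that step, assuming the standard kubeadm CLI and the binary directory found above:
	  # assumption: how the staged config is typically applied, not a command from this log
	  sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=all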
	I1002 16:13:21.599697   51464 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 16:13:21.604875   51464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 16:13:21.616804   51464 certs.go:56] Setting up /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000 for IP: 192.168.49.2
	I1002 16:13:21.616862   51464 certs.go:190] acquiring lock for shared ca certs: {Name:mk1a4dcdc0cbdf00aaf75115a29ae48b03ffc4e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:13:21.617089   51464 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.key
	I1002 16:13:21.617176   51464 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17323-48076/.minikube/proxy-client-ca.key
	I1002 16:13:21.617220   51464 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/client.key
	I1002 16:13:21.617235   51464 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/client.crt with IP's: []
	I1002 16:13:21.712754   51464 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/client.crt ...
	I1002 16:13:21.712769   51464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/client.crt: {Name:mk438807e583d3bb51f6d13afa09400f31f208c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:13:21.713078   51464 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/client.key ...
	I1002 16:13:21.713090   51464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/client.key: {Name:mk56322c6137939f1dc4057f03f176129f142b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:13:21.713302   51464 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.key.dd3b5fb2
	I1002 16:13:21.713317   51464 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 16:13:21.839535   51464 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.crt.dd3b5fb2 ...
	I1002 16:13:21.839547   51464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.crt.dd3b5fb2: {Name:mk4fc73ea468f5209a7ff5f2ecfa2ea50e057592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:13:21.839816   51464 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.key.dd3b5fb2 ...
	I1002 16:13:21.839828   51464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.key.dd3b5fb2: {Name:mkc50cbdb463c53d80382cc172be5b4a6cca50b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:13:21.840022   51464 certs.go:337] copying /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.crt
	I1002 16:13:21.840235   51464 certs.go:341] copying /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.key
	I1002 16:13:21.840422   51464 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.key
	I1002 16:13:21.840436   51464 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.crt with IP's: []
	I1002 16:13:21.940474   51464 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.crt ...
	I1002 16:13:21.940483   51464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.crt: {Name:mkd813df7712138002c4cc36bd05a25076ca2253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:13:21.940710   51464 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.key ...
	I1002 16:13:21.940722   51464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.key: {Name:mk4b65103a6ebf3808305d353923b2f6ee7199e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
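	The client, apiserver and proxy-client certificates above are generated in-process (crypto.go) against the cached minikubeCA. For illustration only, an openssl equivalent of the apiserver certificate with the same SANs as this run; this is not what minikube actually executes:
	  # assumption: ca.crt/ca.key are the cached minikubeCA files
	  openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	    -keyout apiserver.key -out apiserver.csr
	  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -days 365 -out apiserver.crt \
	    -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1')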
	I1002 16:13:21.940915   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 16:13:21.940941   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 16:13:21.940958   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 16:13:21.940978   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 16:13:21.940994   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 16:13:21.941011   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 16:13:21.941028   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 16:13:21.941049   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 16:13:21.941132   51464 certs.go:437] found cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/48556.pem (1338 bytes)
	W1002 16:13:21.941173   51464 certs.go:433] ignoring /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/48556_empty.pem, impossibly tiny 0 bytes
	I1002 16:13:21.941186   51464 certs.go:437] found cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 16:13:21.941216   51464 certs.go:437] found cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem (1078 bytes)
	I1002 16:13:21.941243   51464 certs.go:437] found cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem (1123 bytes)
	I1002 16:13:21.941278   51464 certs.go:437] found cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/Users/jenkins/minikube-integration/17323-48076/.minikube/certs/key.pem (1675 bytes)
	I1002 16:13:21.941345   51464 certs.go:437] found cert: /Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/ssl/certs/485562.pem (1708 bytes)
	I1002 16:13:21.941377   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/ssl/certs/485562.pem -> /usr/share/ca-certificates/485562.pem
	I1002 16:13:21.941394   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 16:13:21.941412   51464 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/48556.pem -> /usr/share/ca-certificates/48556.pem
	I1002 16:13:21.941885   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 16:13:21.966932   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 16:13:21.991224   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 16:13:22.015271   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/ingress-addon-legacy-450000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 16:13:22.038467   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 16:13:22.062364   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 16:13:22.086341   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 16:13:22.110144   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 16:13:22.133743   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/ssl/certs/485562.pem --> /usr/share/ca-certificates/485562.pem (1708 bytes)
	I1002 16:13:22.157833   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 16:13:22.181282   51464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/48556.pem --> /usr/share/ca-certificates/48556.pem (1338 bytes)
	I1002 16:13:22.204660   51464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 16:13:22.222398   51464 ssh_runner.go:195] Run: openssl version
	I1002 16:13:22.229104   51464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485562.pem && ln -fs /usr/share/ca-certificates/485562.pem /etc/ssl/certs/485562.pem"
	I1002 16:13:22.239239   51464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485562.pem
	I1002 16:13:22.244545   51464 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 23:08 /usr/share/ca-certificates/485562.pem
	I1002 16:13:22.244601   51464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485562.pem
	I1002 16:13:22.251800   51464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/485562.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 16:13:22.262306   51464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 16:13:22.273352   51464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 16:13:22.278174   51464 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I1002 16:13:22.278219   51464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 16:13:22.285539   51464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 16:13:22.296034   51464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/48556.pem && ln -fs /usr/share/ca-certificates/48556.pem /etc/ssl/certs/48556.pem"
	I1002 16:13:22.306616   51464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48556.pem
	I1002 16:13:22.311460   51464 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 23:08 /usr/share/ca-certificates/48556.pem
	I1002 16:13:22.311519   51464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48556.pem
	I1002 16:13:22.318664   51464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/48556.pem /etc/ssl/certs/51391683.0"
	I1002 16:13:22.328880   51464 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 16:13:22.333345   51464 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 16:13:22.333393   51464 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-450000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-450000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:13:22.333495   51464 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 16:13:22.353179   51464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 16:13:22.362821   51464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 16:13:22.372510   51464 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 16:13:22.372568   51464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 16:13:22.382346   51464 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 16:13:22.382415   51464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 16:13:22.435874   51464 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1002 16:13:22.435938   51464 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 16:13:22.693438   51464 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 16:13:22.693558   51464 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 16:13:22.693642   51464 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 16:13:22.882121   51464 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 16:13:22.883030   51464 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 16:13:22.883068   51464 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 16:13:22.958341   51464 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 16:13:23.002682   51464 out.go:204]   - Generating certificates and keys ...
	I1002 16:13:23.002770   51464 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 16:13:23.002851   51464 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 16:13:23.015235   51464 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 16:13:23.086331   51464 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 16:13:23.227921   51464 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 16:13:23.334886   51464 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 16:13:23.406072   51464 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 16:13:23.406315   51464 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-450000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 16:13:23.503432   51464 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 16:13:23.503640   51464 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-450000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 16:13:23.715031   51464 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 16:13:23.867840   51464 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 16:13:24.125614   51464 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 16:13:24.125771   51464 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 16:13:24.249242   51464 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 16:13:24.311179   51464 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 16:13:24.515316   51464 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 16:13:24.785228   51464 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 16:13:24.785876   51464 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 16:13:24.807371   51464 out.go:204]   - Booting up control plane ...
	I1002 16:13:24.807465   51464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 16:13:24.807573   51464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 16:13:24.807645   51464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 16:13:24.807731   51464 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 16:13:24.807992   51464 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 16:14:04.797451   51464 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1002 16:14:04.798085   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:14:04.798300   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:14:09.799365   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:14:09.799579   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:14:19.801242   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:14:19.801504   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:14:39.802463   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:14:39.802614   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:15:19.806520   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:15:19.806870   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:15:19.806898   51464 kubeadm.go:322] 
	I1002 16:15:19.806976   51464 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1002 16:15:19.807056   51464 kubeadm.go:322] 		timed out waiting for the condition
	I1002 16:15:19.807071   51464 kubeadm.go:322] 
	I1002 16:15:19.807163   51464 kubeadm.go:322] 	This error is likely caused by:
	I1002 16:15:19.807217   51464 kubeadm.go:322] 		- The kubelet is not running
	I1002 16:15:19.807475   51464 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1002 16:15:19.807508   51464 kubeadm.go:322] 
	I1002 16:15:19.807689   51464 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1002 16:15:19.807760   51464 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1002 16:15:19.807831   51464 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1002 16:15:19.807844   51464 kubeadm.go:322] 
	I1002 16:15:19.807986   51464 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1002 16:15:19.808078   51464 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 16:15:19.808092   51464 kubeadm.go:322] 
	I1002 16:15:19.808156   51464 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1002 16:15:19.808189   51464 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1002 16:15:19.808243   51464 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1002 16:15:19.808277   51464 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1002 16:15:19.808283   51464 kubeadm.go:322] 
	I1002 16:15:19.809866   51464 kubeadm.go:322] W1002 23:13:22.434956    1714 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1002 16:15:19.810009   51464 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1002 16:15:19.810080   51464 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1002 16:15:19.810193   51464 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1002 16:15:19.810281   51464 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 16:15:19.810433   51464 kubeadm.go:322] W1002 23:13:24.791180    1714 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 16:15:19.810548   51464 kubeadm.go:322] W1002 23:13:24.792211    1714 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 16:15:19.810630   51464 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1002 16:15:19.810686   51464 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1002 16:15:19.810814   51464 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-450000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-450000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1002 23:13:22.434956    1714 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 23:13:24.791180    1714 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1002 23:13:24.792211    1714 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-450000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-450000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1002 23:13:22.434956    1714 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 23:13:24.791180    1714 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1002 23:13:24.792211    1714 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 16:15:19.810848   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1002 16:15:20.244144   51464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 16:15:20.256627   51464 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 16:15:20.256693   51464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 16:15:20.266133   51464 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 16:15:20.266165   51464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 16:15:20.316682   51464 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1002 16:15:20.316732   51464 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 16:15:20.567930   51464 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 16:15:20.568030   51464 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 16:15:20.568115   51464 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 16:15:20.750195   51464 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 16:15:20.750969   51464 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 16:15:20.751022   51464 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 16:15:20.834431   51464 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 16:15:20.856059   51464 out.go:204]   - Generating certificates and keys ...
	I1002 16:15:20.856133   51464 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 16:15:20.856190   51464 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 16:15:20.856243   51464 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 16:15:20.856302   51464 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 16:15:20.856419   51464 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 16:15:20.856491   51464 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 16:15:20.856576   51464 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 16:15:20.856701   51464 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 16:15:20.856806   51464 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 16:15:20.856903   51464 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 16:15:20.856964   51464 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 16:15:20.857100   51464 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 16:15:21.079608   51464 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 16:15:21.161060   51464 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 16:15:21.239055   51464 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 16:15:21.426514   51464 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 16:15:21.427183   51464 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 16:15:21.448667   51464 out.go:204]   - Booting up control plane ...
	I1002 16:15:21.448809   51464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 16:15:21.448971   51464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 16:15:21.449090   51464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 16:15:21.449238   51464 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 16:15:21.449611   51464 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 16:16:01.438701   51464 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1002 16:16:01.439494   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:16:01.439793   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:16:06.441189   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:16:06.441427   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:16:16.443333   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:16:16.443558   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:16:36.446434   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:16:36.446670   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:17:16.450605   51464 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 16:17:16.450934   51464 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 16:17:16.450988   51464 kubeadm.go:322] 
	I1002 16:17:16.451082   51464 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1002 16:17:16.451200   51464 kubeadm.go:322] 		timed out waiting for the condition
	I1002 16:17:16.451216   51464 kubeadm.go:322] 
	I1002 16:17:16.451273   51464 kubeadm.go:322] 	This error is likely caused by:
	I1002 16:17:16.451314   51464 kubeadm.go:322] 		- The kubelet is not running
	I1002 16:17:16.451434   51464 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1002 16:17:16.451446   51464 kubeadm.go:322] 
	I1002 16:17:16.451582   51464 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1002 16:17:16.451634   51464 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1002 16:17:16.451662   51464 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1002 16:17:16.451667   51464 kubeadm.go:322] 
	I1002 16:17:16.451751   51464 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1002 16:17:16.451817   51464 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 16:17:16.451824   51464 kubeadm.go:322] 
	I1002 16:17:16.451899   51464 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1002 16:17:16.451942   51464 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1002 16:17:16.452007   51464 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1002 16:17:16.452034   51464 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1002 16:17:16.452039   51464 kubeadm.go:322] 
	I1002 16:17:16.453969   51464 kubeadm.go:322] W1002 23:15:20.314807    4785 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1002 16:17:16.454115   51464 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1002 16:17:16.454235   51464 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1002 16:17:16.454352   51464 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1002 16:17:16.454435   51464 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 16:17:16.454560   51464 kubeadm.go:322] W1002 23:15:21.431048    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 16:17:16.454666   51464 kubeadm.go:322] W1002 23:15:21.431866    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 16:17:16.454737   51464 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1002 16:17:16.454811   51464 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1002 16:17:16.454836   51464 kubeadm.go:406] StartCluster complete in 3m54.113565173s
	I1002 16:17:16.454918   51464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1002 16:17:16.474312   51464 logs.go:284] 0 containers: []
	W1002 16:17:16.474325   51464 logs.go:286] No container was found matching "kube-apiserver"
	I1002 16:17:16.474391   51464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1002 16:17:16.494451   51464 logs.go:284] 0 containers: []
	W1002 16:17:16.494465   51464 logs.go:286] No container was found matching "etcd"
	I1002 16:17:16.494550   51464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1002 16:17:16.514801   51464 logs.go:284] 0 containers: []
	W1002 16:17:16.514814   51464 logs.go:286] No container was found matching "coredns"
	I1002 16:17:16.514887   51464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1002 16:17:16.536439   51464 logs.go:284] 0 containers: []
	W1002 16:17:16.536452   51464 logs.go:286] No container was found matching "kube-scheduler"
	I1002 16:17:16.536526   51464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1002 16:17:16.557384   51464 logs.go:284] 0 containers: []
	W1002 16:17:16.557397   51464 logs.go:286] No container was found matching "kube-proxy"
	I1002 16:17:16.557475   51464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1002 16:17:16.579296   51464 logs.go:284] 0 containers: []
	W1002 16:17:16.579311   51464 logs.go:286] No container was found matching "kube-controller-manager"
	I1002 16:17:16.579375   51464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1002 16:17:16.600140   51464 logs.go:284] 0 containers: []
	W1002 16:17:16.600173   51464 logs.go:286] No container was found matching "kindnet"
	I1002 16:17:16.600194   51464 logs.go:123] Gathering logs for Docker ...
	I1002 16:17:16.600210   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1002 16:17:16.639688   51464 logs.go:123] Gathering logs for container status ...
	I1002 16:17:16.639704   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 16:17:16.693300   51464 logs.go:123] Gathering logs for kubelet ...
	I1002 16:17:16.693314   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 16:17:16.731045   51464 logs.go:123] Gathering logs for dmesg ...
	I1002 16:17:16.731060   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 16:17:16.744927   51464 logs.go:123] Gathering logs for describe nodes ...
	I1002 16:17:16.744946   51464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 16:17:16.801528   51464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1002 16:17:16.801562   51464 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1002 23:15:20.314807    4785 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 23:15:21.431048    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1002 23:15:21.431866    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1002 16:17:16.801595   51464 out.go:239] * 
	* 
	W1002 16:17:16.801657   51464 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1002 23:15:20.314807    4785 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 23:15:21.431048    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1002 23:15:21.431866    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1002 23:15:20.314807    4785 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 23:15:21.431048    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1002 23:15:21.431866    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 16:17:16.801673   51464 out.go:239] * 
	* 
	W1002 16:17:16.802298   51464 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 16:17:16.866284   51464 out.go:177] 
	W1002 16:17:16.910148   51464 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1002 23:15:20.314807    4785 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 23:15:21.431048    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1002 23:15:21.431866    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1002 23:15:20.314807    4785 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 23:15:21.431048    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1002 23:15:21.431866    4785 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 16:17:16.910219   51464 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1002 16:17:16.910249   51464 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1002 16:17:16.932156   51464 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-450000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (262.65s)
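A minimal sketch of the kubelet triage that the kubeadm output above recommends, expressed as host-side commands against this profile. The profile name, kubeadm advice, and the retry flag are all taken from the failure output itself; CONTAINERID is the placeholder from that advice and must be replaced with an ID from the docker ps listing:

	# Inspect kubelet state and recent logs inside the minikube node
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-450000 "sudo systemctl status kubelet --no-pager"
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-450000 "sudo journalctl -u kubelet --no-pager | tail -n 100"
	# List Kubernetes containers and read the logs of any that crashed (commands quoted from the kubeadm advice)
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-450000 "docker ps -a | grep kube | grep -v pause"
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-450000 "docker logs CONTAINERID"
	# Retry with the systemd cgroup driver, as the suggestion in the failure output proposes
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-450000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --extra-config=kubelet.cgroup-driver=systemd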

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (88.88s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-450000 addons enable ingress --alsologtostderr -v=5
E1002 16:18:17.373801   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-450000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m28.458018649s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:17:17.079448   51769 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:17:17.080506   51769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:17:17.080513   51769 out.go:309] Setting ErrFile to fd 2...
	I1002 16:17:17.080517   51769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:17:17.080700   51769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:17:17.081315   51769 config.go:182] Loaded profile config "ingress-addon-legacy-450000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 16:17:17.081333   51769 addons.go:594] checking whether the cluster is paused
	I1002 16:17:17.081413   51769 config.go:182] Loaded profile config "ingress-addon-legacy-450000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 16:17:17.081436   51769 host.go:66] Checking if "ingress-addon-legacy-450000" exists ...
	I1002 16:17:17.081839   51769 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Status}}
	I1002 16:17:17.133938   51769 ssh_runner.go:195] Run: systemctl --version
	I1002 16:17:17.134033   51769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:17:17.184521   51769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:17:17.274308   51769 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 16:17:17.317357   51769 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1002 16:17:17.338048   51769 config.go:182] Loaded profile config "ingress-addon-legacy-450000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 16:17:17.338066   51769 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-450000"
	I1002 16:17:17.338073   51769 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-450000"
	I1002 16:17:17.338109   51769 host.go:66] Checking if "ingress-addon-legacy-450000" exists ...
	I1002 16:17:17.338412   51769 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Status}}
	I1002 16:17:17.411211   51769 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1002 16:17:17.433309   51769 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1002 16:17:17.456009   51769 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1002 16:17:17.477260   51769 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1002 16:17:17.499493   51769 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 16:17:17.499513   51769 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1002 16:17:17.499608   51769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:17:17.553700   51769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:17:17.655260   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:17.709817   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:17.709849   51769 retry.go:31] will retry after 302.829198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:18.012856   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:18.072092   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:18.072113   51769 retry.go:31] will retry after 415.218396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:18.488093   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:18.544220   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:18.544243   51769 retry.go:31] will retry after 537.538245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:19.082046   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:19.137171   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:19.137192   51769 retry.go:31] will retry after 798.070548ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:19.937697   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:19.993763   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:19.993793   51769 retry.go:31] will retry after 1.219012517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:21.214013   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:21.269913   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:21.269931   51769 retry.go:31] will retry after 1.001841981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:22.271967   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:22.328021   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:22.328042   51769 retry.go:31] will retry after 2.260207599s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:24.589801   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:24.645890   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:24.645908   51769 retry.go:31] will retry after 5.613921289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:30.261421   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:30.317530   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:30.317550   51769 retry.go:31] will retry after 9.579321196s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:39.897800   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:39.953055   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:39.953072   51769 retry.go:31] will retry after 10.093230945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:50.046992   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:17:50.102502   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:17:50.102519   51769 retry.go:31] will retry after 10.817515003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:00.922857   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:18:00.979100   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:00.979117   51769 retry.go:31] will retry after 12.034824356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:13.016480   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:18:13.072482   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:13.072498   51769 retry.go:31] will retry after 32.248792631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:45.324864   51769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1002 16:18:45.382379   51769 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:45.382418   51769 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-450000"
	I1002 16:18:45.404082   51769 out.go:177] * Verifying ingress addon...
	I1002 16:18:45.426338   51769 out.go:177] 
	W1002 16:18:45.448101   51769 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-450000" does not exist: client config: context "ingress-addon-legacy-450000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-450000" does not exist: client config: context "ingress-addon-legacy-450000" does not exist]
	W1002 16:18:45.448132   51769 out.go:239] * 
	* 
	W1002 16:18:45.455697   51769 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 16:18:45.476685   51769 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-450000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-450000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3",
	        "Created": "2023-10-02T23:13:04.779768405Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53200,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T23:13:05.005117128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:88569b0f9771c4a82a68adf2effda10d3720003bb7c688860ef975d692546171",
	        "ResolvConfPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/hostname",
	        "HostsPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/hosts",
	        "LogPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3-json.log",
	        "Name": "/ingress-addon-legacy-450000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-450000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-450000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355-init/diff:/var/lib/docker/overlay2/f7e5e3aa1f48e8eadb56e8b7ed01e2d0233ed0b29a2fdad5ecdd5be32aa13d1d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-450000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-450000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-450000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-450000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-450000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "910d52723cb79b6db960c26282aaa3535cea5d05bfc4e7f141b726e235839b65",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/910d52723cb7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-450000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "38743eca4251",
	                        "ingress-addon-legacy-450000"
	                    ],
	                    "NetworkID": "4dd72b82929b6065f23718ba64bf196fa86b07566fca6cf8e7b900eb08b0da6d",
	                    "EndpointID": "20e3ce9f13c2ea991439f5a556cd2cfaf5a92190af5fc52e1c0593582e01c34e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-450000 -n ingress-addon-legacy-450000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-450000 -n ingress-addon-legacy-450000: exit status 6 (367.44254ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:18:45.909669   51811 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-450000" does not appear in /Users/jenkins/minikube-integration/17323-48076/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-450000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (88.88s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (102.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-450000 addons enable ingress-dns --alsologtostderr -v=5
E1002 16:20:21.498665   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-450000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m42.20084734s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:18:45.963444   51821 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:18:45.964076   51821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:18:45.964082   51821 out.go:309] Setting ErrFile to fd 2...
	I1002 16:18:45.964086   51821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:18:45.964274   51821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:18:45.964893   51821 config.go:182] Loaded profile config "ingress-addon-legacy-450000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 16:18:45.964910   51821 addons.go:594] checking whether the cluster is paused
	I1002 16:18:45.964989   51821 config.go:182] Loaded profile config "ingress-addon-legacy-450000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 16:18:45.965011   51821 host.go:66] Checking if "ingress-addon-legacy-450000" exists ...
	I1002 16:18:45.965412   51821 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Status}}
	I1002 16:18:46.051602   51821 ssh_runner.go:195] Run: systemctl --version
	I1002 16:18:46.051697   51821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:18:46.103896   51821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:18:46.195412   51821 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 16:18:46.236000   51821 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1002 16:18:46.257925   51821 config.go:182] Loaded profile config "ingress-addon-legacy-450000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 16:18:46.257954   51821 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-450000"
	I1002 16:18:46.257972   51821 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-450000"
	I1002 16:18:46.258022   51821 host.go:66] Checking if "ingress-addon-legacy-450000" exists ...
	I1002 16:18:46.258578   51821 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-450000 --format={{.State.Status}}
	I1002 16:18:46.332017   51821 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1002 16:18:46.354071   51821 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1002 16:18:46.375647   51821 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 16:18:46.375666   51821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1002 16:18:46.375756   51821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-450000
	I1002 16:18:46.427304   51821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57074 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/ingress-addon-legacy-450000/id_rsa Username:docker}
	I1002 16:18:46.530352   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:46.588474   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:46.588510   51821 retry.go:31] will retry after 313.720987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:46.904549   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:46.960470   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:46.960495   51821 retry.go:31] will retry after 229.28861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:47.191606   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:47.246623   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:47.246646   51821 retry.go:31] will retry after 706.898976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:47.955853   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:48.013131   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:48.013152   51821 retry.go:31] will retry after 1.000297588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:49.013671   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:49.069571   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:49.069590   51821 retry.go:31] will retry after 1.75596761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:50.826648   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:50.884034   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:50.884059   51821 retry.go:31] will retry after 1.94257254s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:52.829013   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:52.887085   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:52.887104   51821 retry.go:31] will retry after 1.790092476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:54.677648   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:18:54.733570   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:18:54.733590   51821 retry.go:31] will retry after 6.040198225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:00.774184   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:19:00.832266   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:00.832286   51821 retry.go:31] will retry after 8.435598548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:09.270337   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:19:09.325545   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:09.325565   51821 retry.go:31] will retry after 6.432155102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:15.758640   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:19:15.817873   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:15.817901   51821 retry.go:31] will retry after 20.1061991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:35.927304   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:19:35.983748   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:35.987611   51821 retry.go:31] will retry after 23.935770677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:59.925054   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:19:59.982958   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:19:59.982982   51821 retry.go:31] will retry after 27.992163377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:20:27.976823   51821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1002 16:20:28.032015   51821 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 16:20:28.053742   51821 out.go:177] 
	W1002 16:20:28.074762   51821 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1002 16:20:28.074795   51821 out.go:239] * 
	* 
	W1002 16:20:28.082547   51821 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 16:20:28.103708   51821 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-450000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-450000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3",
	        "Created": "2023-10-02T23:13:04.779768405Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53200,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T23:13:05.005117128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:88569b0f9771c4a82a68adf2effda10d3720003bb7c688860ef975d692546171",
	        "ResolvConfPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/hostname",
	        "HostsPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/hosts",
	        "LogPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3-json.log",
	        "Name": "/ingress-addon-legacy-450000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-450000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-450000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355-init/diff:/var/lib/docker/overlay2/f7e5e3aa1f48e8eadb56e8b7ed01e2d0233ed0b29a2fdad5ecdd5be32aa13d1d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-450000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-450000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-450000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-450000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-450000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "910d52723cb79b6db960c26282aaa3535cea5d05bfc4e7f141b726e235839b65",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/910d52723cb7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-450000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "38743eca4251",
	                        "ingress-addon-legacy-450000"
	                    ],
	                    "NetworkID": "4dd72b82929b6065f23718ba64bf196fa86b07566fca6cf8e7b900eb08b0da6d",
	                    "EndpointID": "20e3ce9f13c2ea991439f5a556cd2cfaf5a92190af5fc52e1c0593582e01c34e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-450000 -n ingress-addon-legacy-450000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-450000 -n ingress-addon-legacy-450000: exit status 6 (386.614419ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:20:28.556178   51852 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-450000" does not appear in /Users/jenkins/minikube-integration/17323-48076/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-450000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (102.64s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:179: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-450000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-450000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3",
	        "Created": "2023-10-02T23:13:04.779768405Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53200,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T23:13:05.005117128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:88569b0f9771c4a82a68adf2effda10d3720003bb7c688860ef975d692546171",
	        "ResolvConfPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/hostname",
	        "HostsPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/hosts",
	        "LogPath": "/var/lib/docker/containers/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3/38743eca4251e6230337fce960a161bafebd892800b13dfa651c017865e5dde3-json.log",
	        "Name": "/ingress-addon-legacy-450000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-450000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-450000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355-init/diff:/var/lib/docker/overlay2/f7e5e3aa1f48e8eadb56e8b7ed01e2d0233ed0b29a2fdad5ecdd5be32aa13d1d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10e6ad8ba9a684ca9f5087bf7a887494aceb250c493ac9f7577875c2c0ef8355/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-450000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-450000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-450000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-450000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-450000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "910d52723cb79b6db960c26282aaa3535cea5d05bfc4e7f141b726e235839b65",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/910d52723cb7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-450000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "38743eca4251",
	                        "ingress-addon-legacy-450000"
	                    ],
	                    "NetworkID": "4dd72b82929b6065f23718ba64bf196fa86b07566fca6cf8e7b900eb08b0da6d",
	                    "EndpointID": "20e3ce9f13c2ea991439f5a556cd2cfaf5a92190af5fc52e1c0593582e01c34e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-450000 -n ingress-addon-legacy-450000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-450000 -n ingress-addon-legacy-450000: exit status 6 (366.758657ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:20:28.978428   51864 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-450000" does not appear in /Users/jenkins/minikube-integration/17323-48076/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-450000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (884.48s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-321000 ssh -- ls /minikube-host
E1002 16:25:21.389394   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:25:33.422069   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:26:44.438345   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:30:21.394782   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:30:33.427697   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:31:56.481750   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:35:21.403192   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:35:33.434836   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-321000 ssh -- ls /minikube-host: signal: killed (14m44.064024938s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-321000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-321000
helpers_test.go:235: (dbg) docker inspect mount-start-2-321000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "56de4b4b3c031a6d1d986ee85067bd581e7a130ffa4687d2ed4c3086130183f1",
	        "Created": "2023-10-02T23:24:31.740613688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 101526,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T23:24:31.96406955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:88569b0f9771c4a82a68adf2effda10d3720003bb7c688860ef975d692546171",
	        "ResolvConfPath": "/var/lib/docker/containers/56de4b4b3c031a6d1d986ee85067bd581e7a130ffa4687d2ed4c3086130183f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56de4b4b3c031a6d1d986ee85067bd581e7a130ffa4687d2ed4c3086130183f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/56de4b4b3c031a6d1d986ee85067bd581e7a130ffa4687d2ed4c3086130183f1/hosts",
	        "LogPath": "/var/lib/docker/containers/56de4b4b3c031a6d1d986ee85067bd581e7a130ffa4687d2ed4c3086130183f1/56de4b4b3c031a6d1d986ee85067bd581e7a130ffa4687d2ed4c3086130183f1-json.log",
	        "Name": "/mount-start-2-321000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-321000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-321000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/123dc21df1253fd117d9166d74ab2e86d63979b46bf08efd079b8dffa4285acb-init/diff:/var/lib/docker/overlay2/f7e5e3aa1f48e8eadb56e8b7ed01e2d0233ed0b29a2fdad5ecdd5be32aa13d1d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/123dc21df1253fd117d9166d74ab2e86d63979b46bf08efd079b8dffa4285acb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/123dc21df1253fd117d9166d74ab2e86d63979b46bf08efd079b8dffa4285acb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/123dc21df1253fd117d9166d74ab2e86d63979b46bf08efd079b8dffa4285acb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-321000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-321000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-321000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-321000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-321000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93d53e69dfc9fb205327dd98fdb7307e793f6e1b24c83c07d1d568c9137cd4d3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57359"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57360"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57361"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57357"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93d53e69dfc9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-321000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "56de4b4b3c03",
	                        "mount-start-2-321000"
	                    ],
	                    "NetworkID": "6954e55ccdafa987e16ca35b5d3bbb00e337ad314f8a7f9db872fa0fae964bf9",
	                    "EndpointID": "aff19a0c994f266ece5dc14eacf66842fb10d4273a9e5af0cb8eb48722efa904",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
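For reference, the mapped host ports shown in the inspect output above (22/tcp -> 57358, and so on) are the same fields the test helpers later read back with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. Below is a minimal sketch of that lookup using the Docker Go SDK; the container name comes from the dump above, everything else is illustrative and is not the helpers_test.go implementation:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		// Container name taken from the inspect dump above.
		const name = "mount-start-2-321000"

		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), name)
		if err != nil {
			panic(err)
		}

		// Same field the CLI template reads: NetworkSettings.Ports["22/tcp"][0].HostPort
		bindings := info.NetworkSettings.Ports[nat.Port("22/tcp")]
		if len(bindings) == 0 {
			fmt.Println("no host port mapped for 22/tcp")
			return
		}
		fmt.Println("ssh host port:", bindings[0].HostPort)
	}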
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-321000 -n mount-start-2-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-321000 -n mount-start-2-321000: exit status 6 (367.034716ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:39:22.238542   53617 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-321000" does not appear in /Users/jenkins/minikube-integration/17323-48076/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-321000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (884.48s)
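The exit status 6 above is the kubeconfig endpoint check failing: the profile name has no entry in the kubeconfig, so no API-server IP can be extracted from it. A minimal sketch of that kind of lookup with client-go follows; it is illustrative only, not minikube's status.go code, and the exit codes are assumptions:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := os.Getenv("KUBECONFIG") // e.g. the minikube-integration kubeconfig used in this run

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintf(os.Stderr, "load kubeconfig: %v\n", err)
			os.Exit(1)
		}

		profile := "mount-start-2-321000" // cluster name the test expects to find
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			// This is the condition reported above: the profile does not
			// appear in the kubeconfig, so there is no endpoint to parse.
			fmt.Fprintf(os.Stderr, "%q does not appear in %s\n", profile, kubeconfig)
			os.Exit(6)
		}
		fmt.Println("API server endpoint:", cluster.Server)
	}

Running `minikube update-context` against the profile, as the warning in the stdout above suggests, is the documented way to repoint a stale kubeconfig entry.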

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (756.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1002 16:40:33.444793   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:43:24.469703   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:45:21.422616   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:45:33.455818   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:48:36.515350   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:50:21.433126   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:50:33.466381   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m36.59938605s)

                                                
                                                
-- stdout --
	* [multinode-053000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-053000 in cluster multinode-053000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:40:33.016156   53745 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:40:33.016356   53745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:40:33.016361   53745 out.go:309] Setting ErrFile to fd 2...
	I1002 16:40:33.016365   53745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:40:33.016542   53745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:40:33.018009   53745 out.go:303] Setting JSON to false
	I1002 16:40:33.040076   53745 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22202,"bootTime":1696267831,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 16:40:33.040183   53745 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 16:40:33.061426   53745 out.go:177] * [multinode-053000] minikube v1.31.2 on Darwin 14.0
	I1002 16:40:33.104556   53745 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 16:40:33.104651   53745 notify.go:220] Checking for updates...
	I1002 16:40:33.128357   53745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 16:40:33.148369   53745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 16:40:33.169177   53745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 16:40:33.190237   53745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 16:40:33.211374   53745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 16:40:33.232894   53745 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 16:40:33.290084   53745 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 16:40:33.290206   53745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:40:33.390495   53745 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-02 23:40:33.379581257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:40:33.432841   53745 out.go:177] * Using the docker driver based on user configuration
	I1002 16:40:33.454591   53745 start.go:298] selected driver: docker
	I1002 16:40:33.454620   53745 start.go:902] validating driver "docker" against <nil>
	I1002 16:40:33.454635   53745 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 16:40:33.458732   53745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:40:33.555696   53745 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-02 23:40:33.545306481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:40:33.555916   53745 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 16:40:33.556097   53745 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 16:40:33.580398   53745 out.go:177] * Using Docker Desktop driver with root privileges
	I1002 16:40:33.617778   53745 cni.go:84] Creating CNI manager for ""
	I1002 16:40:33.617806   53745 cni.go:136] 0 nodes found, recommending kindnet
	I1002 16:40:33.617820   53745 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 16:40:33.617842   53745 start_flags.go:321] config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:40:33.678670   53745 out.go:177] * Starting control plane node multinode-053000 in cluster multinode-053000
	I1002 16:40:33.701777   53745 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 16:40:33.723763   53745 out.go:177] * Pulling base image ...
	I1002 16:40:33.765820   53745 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 16:40:33.765909   53745 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 16:40:33.765930   53745 cache.go:57] Caching tarball of preloaded images
	I1002 16:40:33.765914   53745 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 16:40:33.766129   53745 preload.go:174] Found /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 16:40:33.766157   53745 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 16:40:33.767714   53745 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/multinode-053000/config.json ...
	I1002 16:40:33.767837   53745 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/multinode-053000/config.json: {Name:mked7aab2cc4ddc86be6266a6f1e912922dd40d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 16:40:33.817636   53745 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 16:40:33.817656   53745 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 16:40:33.817683   53745 cache.go:195] Successfully downloaded all kic artifacts
	I1002 16:40:33.817738   53745 start.go:365] acquiring machines lock for multinode-053000: {Name:mk8edede740faa8024fd55789b3e24ceccf4bf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 16:40:33.817881   53745 start.go:369] acquired machines lock for "multinode-053000" in 131.193µs
	I1002 16:40:33.817908   53745 start.go:93] Provisioning new machine with config: &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-053000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 16:40:33.817971   53745 start.go:125] createHost starting for "" (driver="docker")
	I1002 16:40:33.839742   53745 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 16:40:33.840125   53745 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1002 16:40:33.840179   53745 client.go:168] LocalClient.Create starting
	I1002 16:40:33.840371   53745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 16:40:33.840463   53745 main.go:141] libmachine: Decoding PEM data...
	I1002 16:40:33.840501   53745 main.go:141] libmachine: Parsing certificate...
	I1002 16:40:33.840614   53745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 16:40:33.840688   53745 main.go:141] libmachine: Decoding PEM data...
	I1002 16:40:33.840704   53745 main.go:141] libmachine: Parsing certificate...
	I1002 16:40:33.860867   53745 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 16:40:33.912715   53745 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 16:40:33.912820   53745 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1002 16:40:33.912834   53745 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1002 16:40:33.962969   53745 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1002 16:40:33.962995   53745 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1002 16:40:33.963013   53745 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1002 16:40:33.963162   53745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 16:40:34.015690   53745 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 16:40:34.016059   53745 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e9d0c0}
	I1002 16:40:34.016077   53745 network_create.go:124] attempt to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1002 16:40:34.016155   53745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1002 16:40:34.103486   53745 network_create.go:108] docker network multinode-053000 192.168.58.0/24 created
	I1002 16:40:34.103517   53745 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-053000" container
	I1002 16:40:34.103640   53745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 16:40:34.154951   53745 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1002 16:40:34.206297   53745 oci.go:103] Successfully created a docker volume multinode-053000
	I1002 16:40:34.206403   53745 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 16:40:34.626678   53745 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1002 16:40:34.626714   53745 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 16:40:34.626732   53745 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 16:40:34.626820   53745 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 16:46:33.853688   53745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 16:46:33.853823   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:33.909109   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:46:33.909263   53745 retry.go:31] will retry after 232.619691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:34.143339   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:34.196885   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:46:34.196976   53745 retry.go:31] will retry after 308.861953ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:34.506071   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:34.556089   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:46:34.556189   53745 retry.go:31] will retry after 823.162527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:35.381826   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:35.437465   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 16:46:35.437567   53745 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 16:46:35.437588   53745 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:35.437642   53745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 16:46:35.437697   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:35.487318   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:46:35.487404   53745 retry.go:31] will retry after 165.735386ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:35.655578   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:35.711176   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:46:35.711282   53745 retry.go:31] will retry after 325.440588ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:36.039102   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:36.095136   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:46:36.095232   53745 retry.go:31] will retry after 564.385411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:36.660018   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:46:36.716481   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 16:46:36.716580   53745 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 16:46:36.716605   53745 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:36.716619   53745 start.go:128] duration metric: createHost completed in 6m2.886386119s
	I1002 16:46:36.716627   53745 start.go:83] releasing machines lock for "multinode-053000", held for 6m2.886487187s
	W1002 16:46:36.716639   53745 start.go:688] error starting host: creating host: create host timed out in 360.000000 seconds
	I1002 16:46:36.717051   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:36.767744   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:36.767790   53745 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	W1002 16:46:36.767873   53745 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1002 16:46:36.767884   53745 start.go:703] Will try again in 5 seconds ...
	I1002 16:46:41.768325   53745 start.go:365] acquiring machines lock for multinode-053000: {Name:mk8edede740faa8024fd55789b3e24ceccf4bf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 16:46:41.768503   53745 start.go:369] acquired machines lock for "multinode-053000" in 137.921µs
	I1002 16:46:41.768539   53745 start.go:96] Skipping create...Using existing machine configuration
	I1002 16:46:41.768555   53745 fix.go:54] fixHost starting: 
	I1002 16:46:41.768985   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:41.821505   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:41.821549   53745 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:41.821580   53745 fix.go:107] machineExists: false. err=machine does not exist
	I1002 16:46:41.843290   53745 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1002 16:46:41.887074   53745 delete.go:124] DEMOLISHING multinode-053000 ...
	I1002 16:46:41.887322   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:41.939763   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 16:46:41.939818   53745 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:41.939849   53745 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:41.940190   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:41.989903   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:41.989946   53745 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:41.990036   53745 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 16:46:42.039780   53745 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 16:46:42.039808   53745 kic.go:367] could not find the container multinode-053000 to remove it. will try anyways
	I1002 16:46:42.039883   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:42.089914   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 16:46:42.089954   53745 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:42.090042   53745 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1002 16:46:42.140493   53745 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 16:46:42.140524   53745 oci.go:647] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:43.142612   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:43.197845   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:43.197889   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:43.197911   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:46:43.197935   53745 retry.go:31] will retry after 276.201037ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:43.474343   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:43.526312   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:43.526387   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:43.526410   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:46:43.526444   53745 retry.go:31] will retry after 705.052831ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:44.231970   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:44.284564   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:44.284612   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:44.284632   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:46:44.284655   53745 retry.go:31] will retry after 1.415191505s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:45.702284   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:45.756679   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:45.756722   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:45.756740   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:46:45.756761   53745 retry.go:31] will retry after 2.18985345s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:47.949025   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:48.003302   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:48.003357   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:48.003375   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:46:48.003397   53745 retry.go:31] will retry after 2.563927753s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:50.569885   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:50.620956   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:50.620997   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:50.621011   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:46:50.621034   53745 retry.go:31] will retry after 5.57509999s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:56.198713   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:46:56.254398   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:46:56.254448   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:46:56.254460   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:46:56.254480   53745 retry.go:31] will retry after 6.287786763s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:47:02.544705   53745 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:47:02.598467   53745 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:47:02.598511   53745 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:47:02.598526   53745 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:47:02.598554   53745 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1002 16:47:02.598629   53745 cli_runner.go:164] Run: docker rm -f -v multinode-053000
	I1002 16:47:02.651076   53745 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 16:47:02.701077   53745 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 16:47:02.701193   53745 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 16:47:02.751896   53745 cli_runner.go:164] Run: docker network rm multinode-053000
	I1002 16:47:02.862248   53745 fix.go:114] Sleeping 1 second for extra luck!
	I1002 16:47:03.862845   53745 start.go:125] createHost starting for "" (driver="docker")
	I1002 16:47:03.883618   53745 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 16:47:03.883779   53745 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1002 16:47:03.883810   53745 client.go:168] LocalClient.Create starting
	I1002 16:47:03.884049   53745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 16:47:03.884148   53745 main.go:141] libmachine: Decoding PEM data...
	I1002 16:47:03.884176   53745 main.go:141] libmachine: Parsing certificate...
	I1002 16:47:03.884259   53745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 16:47:03.884330   53745 main.go:141] libmachine: Decoding PEM data...
	I1002 16:47:03.884356   53745 main.go:141] libmachine: Parsing certificate...
	I1002 16:47:03.905878   53745 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 16:47:03.960146   53745 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 16:47:03.960282   53745 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1002 16:47:03.960304   53745 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1002 16:47:04.010830   53745 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1002 16:47:04.010860   53745 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1002 16:47:04.010876   53745 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1002 16:47:04.011021   53745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 16:47:04.063174   53745 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 16:47:04.064752   53745 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 16:47:04.065113   53745 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e9d770}
	I1002 16:47:04.065133   53745 network_create.go:124] attempt to create docker network multinode-053000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1002 16:47:04.065204   53745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	W1002 16:47:04.115880   53745 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000 returned with exit code 1
	W1002 16:47:04.115921   53745 network_create.go:149] failed to create docker network multinode-053000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1002 16:47:04.115938   53745 network_create.go:116] failed to create docker network multinode-053000 192.168.67.0/24, will retry: subnet is taken
	I1002 16:47:04.117357   53745 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 16:47:04.117821   53745 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00059a0d0}
	I1002 16:47:04.117832   53745 network_create.go:124] attempt to create docker network multinode-053000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1002 16:47:04.117898   53745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1002 16:47:04.205127   53745 network_create.go:108] docker network multinode-053000 192.168.76.0/24 created
	I1002 16:47:04.205161   53745 kic.go:117] calculated static IP "192.168.76.2" for the "multinode-053000" container
	I1002 16:47:04.205310   53745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 16:47:04.256642   53745 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1002 16:47:04.307124   53745 oci.go:103] Successfully created a docker volume multinode-053000
	I1002 16:47:04.307253   53745 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 16:47:04.627144   53745 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1002 16:47:04.627172   53745 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 16:47:04.627184   53745 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 16:47:04.627315   53745 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 16:53:03.897652   53745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 16:53:03.897784   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:03.953470   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:03.953582   53745 retry.go:31] will retry after 294.516849ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:04.249372   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:04.303516   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:04.303610   53745 retry.go:31] will retry after 477.154783ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:04.782744   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:04.839367   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:04.839475   53745 retry.go:31] will retry after 375.944831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:05.217870   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:05.271030   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 16:53:05.271135   53745 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 16:53:05.271153   53745 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:05.271217   53745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 16:53:05.271283   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:05.321451   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:05.321547   53745 retry.go:31] will retry after 125.175571ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:05.449150   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:05.501902   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:05.501991   53745 retry.go:31] will retry after 436.438604ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:05.940338   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:05.996340   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:05.996436   53745 retry.go:31] will retry after 758.663847ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:06.757576   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:06.813082   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 16:53:06.813185   53745 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 16:53:06.813207   53745 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:06.813221   53745 start.go:128] duration metric: createHost completed in 6m2.937389314s
	I1002 16:53:06.813295   53745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 16:53:06.813355   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:06.865730   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:06.865821   53745 retry.go:31] will retry after 129.493782ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:06.995613   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:07.046542   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:07.046629   53745 retry.go:31] will retry after 560.990965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:07.608931   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:07.661777   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:07.661868   53745 retry.go:31] will retry after 336.224256ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:08.000268   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:08.074042   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 16:53:08.074142   53745 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 16:53:08.074159   53745 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:08.074222   53745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 16:53:08.074271   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:08.124411   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:08.124510   53745 retry.go:31] will retry after 144.543342ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:08.270907   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:08.323090   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:08.323177   53745 retry.go:31] will retry after 207.287495ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:08.531104   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:08.584428   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 16:53:08.584518   53745 retry.go:31] will retry after 830.113991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:09.415738   53745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 16:53:09.470727   53745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 16:53:09.470824   53745 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 16:53:09.470847   53745 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:53:09.470855   53745 fix.go:56] fixHost completed within 6m27.688346459s
	I1002 16:53:09.470863   53745 start.go:83] releasing machines lock for "multinode-053000", held for 6m27.688391428s
	W1002 16:53:09.470948   53745 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1002 16:53:09.513427   53745 out.go:177] 
	W1002 16:53:09.535320   53745 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1002 16:53:09.535359   53745 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1002 16:53:09.535396   53745 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1002 16:53:09.556197   53745 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-053000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (95.412106ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:53:09.762084   53984 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (756.76s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (102.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (80.518957ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-053000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- rollout status deployment/busybox: exit status 1 (79.75802ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (79.469419ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (80.347225ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (81.362451ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (83.42508ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (88.025214ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (81.946898ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (86.917343ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (82.478762ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (88.37177ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (83.427288ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (79.717166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.io: exit status 1 (79.722517ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default: exit status 1 (78.366761ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (79.442142ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (95.307271ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:52.648703   54047 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (102.88s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-053000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (79.643861ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-053000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (93.839287ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:52.877445   54056 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.23s)

                                                
                                    
TestMultiNode/serial/AddNode (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-053000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-053000 -v 3 --alsologtostderr: exit status 80 (193.565183ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:54:52.920082   54060 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:54:52.920930   54060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:52.920937   54060 out.go:309] Setting ErrFile to fd 2...
	I1002 16:54:52.920942   54060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:52.921127   54060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:54:52.921481   54060 mustload.go:65] Loading cluster: multinode-053000
	I1002 16:54:52.921789   54060 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 16:54:52.922177   54060 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:54:52.972171   54060 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:54:52.997300   54060 out.go:177] 
	W1002 16:54:53.018544   54060 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 16:54:53.018579   54060 out.go:239] * 
	* 
	W1002 16:54:53.026120   54060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 16:54:53.047846   54060 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-053000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (94.460381ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:53.221474   54066 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.34s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:155: expected profile "multinode-053000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-321000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-053000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-053000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KV
MNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.2\",\"ClusterName\":\"multinode-053000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\
"Port\":8443,\"KubernetesVersion\":\"v1.28.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"
AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (94.580686ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:53.543207   54078 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.32s)
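The assertion above parses the `profile list --output json` payload and counts the Nodes entries under Config for the profile. A trimmed-down sketch of that check, assuming only the field names visible in the JSON quoted above (everything else in the payload is omitted):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models only the fields the assertion reads; the real
	// payload carries the full cluster config shown in the failure message.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name string
				}
			}
		} `json:"valid"`
	}

	func main() {
		// Heavily trimmed version of the JSON quoted in the failure message.
		raw := `{"valid":[{"Name":"multinode-053000","Config":{"Nodes":[{"Name":""}]}}]}`

		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		// The test expects 3 entries here; the recreated profile reports only 1.
		fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes))
	}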

                                                
                                    
TestMultiNode/serial/CopyFile (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --output json --alsologtostderr: exit status 7 (94.828289ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-053000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:54:53.586103   54082 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:54:53.586386   54082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:53.586391   54082 out.go:309] Setting ErrFile to fd 2...
	I1002 16:54:53.586395   54082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:53.586595   54082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:54:53.586781   54082 out.go:303] Setting JSON to true
	I1002 16:54:53.586805   54082 mustload.go:65] Loading cluster: multinode-053000
	I1002 16:54:53.586851   54082 notify.go:220] Checking for updates...
	I1002 16:54:53.587073   54082 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 16:54:53.587088   54082 status.go:255] checking status of multinode-053000 ...
	I1002 16:54:53.587486   54082 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:54:53.638151   54082 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:54:53.638211   54082 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1002 16:54:53.638235   54082 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1002 16:54:53.638256   54082 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1002 16:54:53.638264   54082 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-053000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (95.082072ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:53.788257   54088 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.25s)
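The "cannot unmarshal object into Go value of type []cmd.Status" error above comes from decoding a single JSON object into a slice: a one-node cluster emits one status object rather than an array. A self-contained sketch reproducing that failure mode (Status here is a hypothetical subset of minikube's cmd.Status):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors a few of the per-node fields shown in the stdout above.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// Single-node output from the log: one JSON object, not an array.
		out := `{"Name":"multinode-053000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`

		var statuses []Status
		err := json.Unmarshal([]byte(out), &statuses)
		// Prints: json: cannot unmarshal object into Go value of type []main.Status
		fmt.Println(err)
	}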

                                                
                                    
TestMultiNode/serial/StopNode (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 node stop m03: exit status 85 (139.053707ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status: exit status 7 (96.057508ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:54.024087   54094 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1002 16:54:54.024097   54094 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr: exit status 7 (94.520244ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:54:54.066341   54098 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:54:54.066625   54098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:54.066631   54098 out.go:309] Setting ErrFile to fd 2...
	I1002 16:54:54.066635   54098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:54.066837   54098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:54:54.067027   54098 out.go:303] Setting JSON to false
	I1002 16:54:54.067049   54098 mustload.go:65] Loading cluster: multinode-053000
	I1002 16:54:54.067083   54098 notify.go:220] Checking for updates...
	I1002 16:54:54.067326   54098 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 16:54:54.067340   54098 status.go:255] checking status of multinode-053000 ...
	I1002 16:54:54.067759   54098 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:54:54.118639   54098 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:54:54.118682   54098 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1002 16:54:54.118703   54098 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1002 16:54:54.118719   54098 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1002 16:54:54.118725   54098 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:233: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:237: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (93.375392ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:54.267497   54104 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.48s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 node start m03 --alsologtostderr: exit status 85 (138.170917ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:54:54.366470   54110 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:54:54.367782   54110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:54.367792   54110 out.go:309] Setting ErrFile to fd 2...
	I1002 16:54:54.367797   54110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:54:54.367973   54110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:54:54.368302   54110 mustload.go:65] Loading cluster: multinode-053000
	I1002 16:54:54.368566   54110 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 16:54:54.390488   54110 out.go:177] 
	W1002 16:54:54.411483   54110 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1002 16:54:54.411508   54110 out.go:239] * 
	* 
	W1002 16:54:54.419484   54110 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 16:54:54.440582   54110 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I1002 16:54:54.366470   54110 out.go:296] Setting OutFile to fd 1 ...
I1002 16:54:54.367782   54110 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:54:54.367792   54110 out.go:309] Setting ErrFile to fd 2...
I1002 16:54:54.367797   54110 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:54:54.367973   54110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
I1002 16:54:54.368302   54110 mustload.go:65] Loading cluster: multinode-053000
I1002 16:54:54.368566   54110 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:54:54.390488   54110 out.go:177] 
W1002 16:54:54.411483   54110 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1002 16:54:54.411508   54110 out.go:239] * 
* 
W1002 16:54:54.419484   54110 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 16:54:54.440582   54110 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status: exit status 7 (94.27583ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:54.556616   54112 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1002 16:54:54.556628   54112 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-053000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "5a61bd1d184b67bc2f78b8e13b1691afccb8ef53c717bc6a3aff22e5c8d088bc",
	        "Created": "2023-10-02T23:47:04.165472364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (94.279204ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 16:54:54.705776   54118 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.44s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (793.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-053000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-053000
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-053000: exit status 82 (15.962823415s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-053000" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr
E1002 16:55:21.443951   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:55:33.477491   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:00:04.506492   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:00:21.456212   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:00:33.486509   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:05:16.553873   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:05:21.466627   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:05:33.499878   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m57.115120661s)

                                                
                                                
-- stdout --
	* [multinode-053000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-053000 in cluster multinode-053000
	* Pulling base image ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:55:10.754925   54140 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:55:10.755119   54140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:55:10.755123   54140 out.go:309] Setting ErrFile to fd 2...
	I1002 16:55:10.755127   54140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:55:10.755306   54140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:55:10.756716   54140 out.go:303] Setting JSON to false
	I1002 16:55:10.778686   54140 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":23079,"bootTime":1696267831,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 16:55:10.778825   54140 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 16:55:10.800922   54140 out.go:177] * [multinode-053000] minikube v1.31.2 on Darwin 14.0
	I1002 16:55:10.864788   54140 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 16:55:10.843564   54140 notify.go:220] Checking for updates...
	I1002 16:55:10.908503   54140 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 16:55:10.929800   54140 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 16:55:10.951826   54140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 16:55:10.973620   54140 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 16:55:10.994854   54140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 16:55:11.017494   54140 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 16:55:11.017677   54140 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 16:55:11.076408   54140 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 16:55:11.076548   54140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:55:11.178710   54140 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:90 SystemTime:2023-10-02 23:55:11.167593451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:55:11.221699   54140 out.go:177] * Using the docker driver based on existing profile
	I1002 16:55:11.243689   54140 start.go:298] selected driver: docker
	I1002 16:55:11.243738   54140 start.go:902] validating driver "docker" against &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-053000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:55:11.243864   54140 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 16:55:11.244062   54140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:55:11.342614   54140 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:90 SystemTime:2023-10-02 23:55:11.332083669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:55:11.345634   54140 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 16:55:11.345668   54140 cni.go:84] Creating CNI manager for ""
	I1002 16:55:11.345676   54140 cni.go:136] 1 nodes found, recommending kindnet
	I1002 16:55:11.345686   54140 start_flags.go:321] config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:55:11.388037   54140 out.go:177] * Starting control plane node multinode-053000 in cluster multinode-053000
	I1002 16:55:11.408894   54140 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 16:55:11.429788   54140 out.go:177] * Pulling base image ...
	I1002 16:55:11.453143   54140 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 16:55:11.453233   54140 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 16:55:11.453265   54140 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 16:55:11.453273   54140 cache.go:57] Caching tarball of preloaded images
	I1002 16:55:11.453446   54140 preload.go:174] Found /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 16:55:11.453464   54140 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 16:55:11.453581   54140 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/multinode-053000/config.json ...
	I1002 16:55:11.504801   54140 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 16:55:11.504826   54140 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 16:55:11.504847   54140 cache.go:195] Successfully downloaded all kic artifacts
	I1002 16:55:11.504903   54140 start.go:365] acquiring machines lock for multinode-053000: {Name:mk8edede740faa8024fd55789b3e24ceccf4bf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 16:55:11.504994   54140 start.go:369] acquired machines lock for "multinode-053000" in 64.307µs
	I1002 16:55:11.505023   54140 start.go:96] Skipping create...Using existing machine configuration
	I1002 16:55:11.505034   54140 fix.go:54] fixHost starting: 
	I1002 16:55:11.505255   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:11.556214   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:11.556254   54140 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:11.556277   54140 fix.go:107] machineExists: false. err=machine does not exist
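The state probe above is a shell-out to the Docker CLI, docker container inspect <name> --format={{.State.Status}}, where exit status 1 with "No such container" on stderr is read as "machine does not exist". A minimal Go sketch of the same check, illustrative only and not minikube's cli_runner code (the helper name is an assumption; the container name is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to `docker container inspect --format={{.State.Status}}`.
// Exit status 1 with "No such container" on stderr means the container does not exist;
// any other failure is surfaced as an unknown state.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "", fmt.Errorf("machine does not exist")
		}
		return "", fmt.Errorf("unknown state %q: %v", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-053000")
	fmt.Println(state, err)
}

On this host the container was never created, so every probe in the trace takes the "No such container" branch.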
	I1002 16:55:11.578142   54140 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1002 16:55:11.599737   54140 delete.go:124] DEMOLISHING multinode-053000 ...
	I1002 16:55:11.599941   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:11.652464   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 16:55:11.652508   54140 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:11.652528   54140 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:11.652879   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:11.703042   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:11.703101   54140 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:11.703188   54140 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 16:55:11.753018   54140 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 16:55:11.753051   54140 kic.go:367] could not find the container multinode-053000 to remove it. will try anyways
	I1002 16:55:11.753126   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:11.803365   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 16:55:11.803407   54140 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:11.803511   54140 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1002 16:55:11.854205   54140 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 16:55:11.854235   54140 oci.go:647] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:12.855914   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:12.909437   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:12.909478   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:12.909491   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:12.909524   54140 retry.go:31] will retry after 270.096318ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:13.181070   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:13.236468   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:13.236512   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:13.236526   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:13.236547   54140 retry.go:31] will retry after 726.087801ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:13.965123   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:14.020072   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:14.020125   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:14.020148   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:14.020173   54140 retry.go:31] will retry after 1.510738016s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:15.532774   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:15.585820   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:15.585865   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:15.585879   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:15.585899   54140 retry.go:31] will retry after 1.902490256s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:17.488947   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:17.543642   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:17.543689   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:17.543700   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:17.543720   54140 retry.go:31] will retry after 1.874669539s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:19.419261   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:19.474327   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:19.474376   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:19.474392   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:19.474414   54140 retry.go:31] will retry after 1.914978541s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:21.391964   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:21.445870   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:21.445914   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:21.445929   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:21.445952   54140 retry.go:31] will retry after 7.967483869s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:29.414503   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 16:55:29.468208   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 16:55:29.468253   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 16:55:29.468264   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 16:55:29.468290   54140 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1002 16:55:29.468375   54140 cli_runner.go:164] Run: docker rm -f -v multinode-053000
	I1002 16:55:29.521408   54140 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 16:55:29.572848   54140 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 16:55:29.572958   54140 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 16:55:29.623635   54140 cli_runner.go:164] Run: docker network rm multinode-053000
	I1002 16:55:29.726874   54140 fix.go:114] Sleeping 1 second for extra luck!
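The retry.go lines above show the same shutdown check being repeated with growing delays (roughly 0.27s, 0.73s, 1.5s, 1.9s, up to about 8s) before minikube gives up and falls back to docker rm -f -v. A minimal sketch of a retry loop with increasing, jittered backoff, assuming a simple doubling policy (the actual retry.go policy may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn until it succeeds or attempts run out, sleeping an
// increasing, jittered delay between tries and logging the next delay, as retry.go does.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		return errors.New("couldn't verify container is exited")
	})
	fmt.Println("giving up:", err)
}

Each failed attempt prints the next delay, which is the shape of the "will retry after ..." lines in the trace.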
	I1002 16:55:30.729055   54140 start.go:125] createHost starting for "" (driver="docker")
	I1002 16:55:30.751930   54140 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 16:55:30.752150   54140 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1002 16:55:30.752190   54140 client.go:168] LocalClient.Create starting
	I1002 16:55:30.752411   54140 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 16:55:30.752499   54140 main.go:141] libmachine: Decoding PEM data...
	I1002 16:55:30.752541   54140 main.go:141] libmachine: Parsing certificate...
	I1002 16:55:30.752660   54140 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 16:55:30.752725   54140 main.go:141] libmachine: Decoding PEM data...
	I1002 16:55:30.752753   54140 main.go:141] libmachine: Parsing certificate...
	I1002 16:55:30.753435   54140 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 16:55:30.805766   54140 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 16:55:30.805852   54140 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1002 16:55:30.805870   54140 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1002 16:55:30.856052   54140 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1002 16:55:30.856080   54140 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1002 16:55:30.856093   54140 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1002 16:55:30.856254   54140 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 16:55:30.908790   54140 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 16:55:30.909182   54140 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00049ad70}
	I1002 16:55:30.909198   54140 network_create.go:124] attempt to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1002 16:55:30.909266   54140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1002 16:55:31.017138   54140 network_create.go:108] docker network multinode-053000 192.168.58.0/24 created
	I1002 16:55:31.017168   54140 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-053000" container
	I1002 16:55:31.017290   54140 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 16:55:31.069207   54140 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1002 16:55:31.120238   54140 oci.go:103] Successfully created a docker volume multinode-053000
	I1002 16:55:31.120351   54140 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 16:55:31.465436   54140 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1002 16:55:31.465467   54140 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 16:55:31.465484   54140 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 16:55:31.465586   54140 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
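The six-minute gap that follows (16:55:31 to 17:01:30) is the step launched above: the preloaded-images tarball is bind-mounted read-only into a throwaway kicbase container, which untars it with lz4 into the multinode-053000 volume. A hedged Go sketch of assembling that docker run (the paths and image digest are copied from the log; the helper itself is illustrative, not kic.go):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars a preloaded-images tarball into a named Docker volume by running
// tar (with lz4 decompression) inside a short-lived container that mounts both of them.
func extractPreload(tarball, volume, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4",
		"multinode-053000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3")
	fmt.Println(err)
}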
	I1002 17:01:30.766143   54140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:01:30.766272   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:30.820782   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:30.820896   54140 retry.go:31] will retry after 169.423113ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:30.991289   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:31.045130   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:31.045244   54140 retry.go:31] will retry after 329.687136ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:31.376307   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:31.431728   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:31.431826   54140 retry.go:31] will retry after 816.107414ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:32.250455   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:32.303148   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:01:32.303253   54140 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:01:32.303279   54140 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:32.303331   54140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:01:32.303385   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:32.353257   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:32.353356   54140 retry.go:31] will retry after 133.963573ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:32.488079   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:32.542395   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:32.542489   54140 retry.go:31] will retry after 400.85731ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:32.945792   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:32.998412   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:32.998499   54140 retry.go:31] will retry after 617.27655ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:33.617467   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:33.673263   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:01:33.673363   54140 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:01:33.673377   54140 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:33.673392   54140 start.go:128] duration metric: createHost completed in 6m2.930861913s
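The repeated inspect calls above, with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, are minikube asking Docker which host port is published for the container's 22/tcp so it can open an SSH session and run df; since the container does not exist, every lookup fails and the disk-space checks are skipped. A minimal Go sketch of that port lookup (the Go template is taken from the log, the helper name is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is published for the container's 22/tcp,
// the port minikube needs before it can SSH in and run commands such as df.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("multinode-053000")
	fmt.Println(port, err)
}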
	I1002 17:01:33.673475   54140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:01:33.673540   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:33.724236   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:33.724323   54140 retry.go:31] will retry after 181.518068ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:33.908280   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:33.961948   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:33.962077   54140 retry.go:31] will retry after 529.614547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:34.494047   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:34.549627   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:34.549739   54140 retry.go:31] will retry after 542.913913ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:35.093588   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:35.147954   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:01:35.148073   54140 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:01:35.148094   54140 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:35.148149   54140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:01:35.148227   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:35.198677   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:35.198765   54140 retry.go:31] will retry after 312.778599ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:35.513473   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:35.567930   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:35.568017   54140 retry.go:31] will retry after 481.544966ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:36.051926   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:36.106516   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:01:36.106601   54140 retry.go:31] will retry after 318.104458ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:36.427111   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:01:36.480945   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:01:36.481041   54140 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:01:36.481062   54140 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:36.481081   54140 fix.go:56] fixHost completed within 6m24.961682291s
	I1002 17:01:36.481089   54140 start.go:83] releasing machines lock for "multinode-053000", held for 6m24.961720577s
	W1002 17:01:36.481104   54140 start.go:688] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W1002 17:01:36.481173   54140 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I1002 17:01:36.481181   54140 start.go:703] Will try again in 5 seconds ...
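createHost is bounded by a hard timeout (the config dump earlier shows StartHostTimeout:6m0s, and the failure above reads "create host timed out in 360.000000 seconds"); when the preload extraction ran past it, the whole attempt was abandoned and rescheduled once, five seconds later. A minimal sketch of bounding a long step with a context deadline, assuming the step is an exec'd docker command (illustrative, not minikube's start.go):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// createHost runs one long provisioning step (here just a docker CLI call) under a hard
// deadline; if the deadline passes, the child process is killed and a timeout is reported.
func createHost(parent context.Context, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(parent, timeout)
	defer cancel()
	if err := exec.CommandContext(ctx, "docker", "ps", "-a", "--format", "{{.Names}}").Run(); err != nil {
		if ctx.Err() == context.DeadlineExceeded {
			return fmt.Errorf("create host timed out in %.6f seconds", timeout.Seconds())
		}
		return err
	}
	return nil
}

func main() {
	if err := createHost(context.Background(), 360*time.Second); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		// a second (and final) attempt would run here
	}
}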
	I1002 17:01:41.482218   54140 start.go:365] acquiring machines lock for multinode-053000: {Name:mk8edede740faa8024fd55789b3e24ceccf4bf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:01:41.482402   54140 start.go:369] acquired machines lock for "multinode-053000" in 146.22µs
	I1002 17:01:41.482444   54140 start.go:96] Skipping create...Using existing machine configuration
	I1002 17:01:41.482452   54140 fix.go:54] fixHost starting: 
	I1002 17:01:41.482894   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:41.537489   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:41.537531   54140 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:41.537556   54140 fix.go:107] machineExists: false. err=machine does not exist
	I1002 17:01:41.559226   54140 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1002 17:01:41.580121   54140 delete.go:124] DEMOLISHING multinode-053000 ...
	I1002 17:01:41.580333   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:41.632067   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 17:01:41.632107   54140 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:41.632127   54140 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:41.632489   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:41.682552   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:41.682601   54140 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:41.682691   54140 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 17:01:41.732062   54140 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 17:01:41.732097   54140 kic.go:367] could not find the container multinode-053000 to remove it. will try anyways
	I1002 17:01:41.732187   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:41.782085   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 17:01:41.782132   54140 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:41.782218   54140 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1002 17:01:41.832100   54140 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 17:01:41.832129   54140 oci.go:647] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:42.834543   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:42.887920   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:42.887964   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:42.887976   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:42.887997   54140 retry.go:31] will retry after 641.456111ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:43.531829   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:43.586329   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:43.586377   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:43.586389   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:43.586426   54140 retry.go:31] will retry after 1.087775738s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:44.676644   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:44.729972   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:44.730014   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:44.730030   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:44.730052   54140 retry.go:31] will retry after 1.343504985s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:46.074455   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:46.128862   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:46.128905   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:46.128916   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:46.128948   54140 retry.go:31] will retry after 2.095183638s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:48.225750   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:48.278179   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:48.278223   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:48.278239   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:48.278265   54140 retry.go:31] will retry after 2.742963549s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:51.022296   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:51.076514   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:51.076559   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:51.076570   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:51.076599   54140 retry.go:31] will retry after 5.415739763s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:56.494668   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:56.549120   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:56.549161   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:56.549173   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:56.549194   54140 retry.go:31] will retry after 3.006663971s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:59.557237   54140 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:01:59.608815   54140 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:01:59.608862   54140 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:01:59.608870   54140 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:01:59.608898   54140 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1002 17:01:59.608973   54140 cli_runner.go:164] Run: docker rm -f -v multinode-053000
	I1002 17:01:59.659972   54140 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 17:01:59.711449   54140 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 17:01:59.711545   54140 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:01:59.762726   54140 cli_runner.go:164] Run: docker network rm multinode-053000
	I1002 17:01:59.867584   54140 fix.go:114] Sleeping 1 second for extra luck!
	I1002 17:02:00.869856   54140 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:02:00.914514   54140 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 17:02:00.914703   54140 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1002 17:02:00.914761   54140 client.go:168] LocalClient.Create starting
	I1002 17:02:00.914946   54140 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:02:00.915049   54140 main.go:141] libmachine: Decoding PEM data...
	I1002 17:02:00.915076   54140 main.go:141] libmachine: Parsing certificate...
	I1002 17:02:00.915164   54140 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:02:00.915229   54140 main.go:141] libmachine: Decoding PEM data...
	I1002 17:02:00.915260   54140 main.go:141] libmachine: Parsing certificate...
	I1002 17:02:00.915819   54140 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:02:01.014676   54140 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:02:01.014773   54140 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1002 17:02:01.014791   54140 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1002 17:02:01.065320   54140 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1002 17:02:01.065351   54140 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1002 17:02:01.065362   54140 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1002 17:02:01.065485   54140 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:02:01.118062   54140 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:02:01.119692   54140 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:02:01.120053   54140 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000edfdd0}
	I1002 17:02:01.120068   54140 network_create.go:124] attempt to create docker network multinode-053000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1002 17:02:01.120139   54140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	W1002 17:02:01.171570   54140 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000 returned with exit code 1
	W1002 17:02:01.171606   54140 network_create.go:149] failed to create docker network multinode-053000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1002 17:02:01.171624   54140 network_create.go:116] failed to create docker network multinode-053000 192.168.67.0/24, will retry: subnet is taken
	I1002 17:02:01.173107   54140 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:02:01.173485   54140 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001210e10}
	I1002 17:02:01.173496   54140 network_create.go:124] attempt to create docker network multinode-053000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1002 17:02:01.173564   54140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1002 17:02:01.260366   54140 network_create.go:108] docker network multinode-053000 192.168.76.0/24 created
	I1002 17:02:01.260394   54140 kic.go:117] calculated static IP "192.168.76.2" for the "multinode-053000" container
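Above, the first candidate subnet 192.168.67.0/24 is rejected by Docker with "Pool overlaps with other one on this address space", so minikube moves on to the next free /24, creates the network on 192.168.76.0/24, and derives the node's static IP (.2) from it. A minimal Go sketch of walking candidate subnets until docker network create succeeds (the candidate list and error matching are assumptions; network_create.go does more bookkeeping):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork tries candidate /24 subnets in order until docker network create succeeds,
// skipping any subnet Docker rejects as overlapping an existing address pool.
func createNetwork(name string, subnets []string) (string, error) {
	for _, cidr := range subnets {
		gateway := strings.TrimSuffix(cidr, "0/24") + "1" // 192.168.76.0/24 -> 192.168.76.1
		out, err := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+cidr, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return cidr, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet is taken, try the next range
		}
		return "", fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet among %v", subnets)
}

func main() {
	cidr, err := createNetwork("multinode-053000",
		[]string{"192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"})
	fmt.Println(cidr, err)
}

The overlap error is exactly what the log shows for 192.168.67.0/24; skipping to the next range reproduces the "will retry: subnet is taken" behaviour.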
	I1002 17:02:01.260511   54140 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:02:01.312378   54140 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:02:01.362607   54140 oci.go:103] Successfully created a docker volume multinode-053000
	I1002 17:02:01.362719   54140 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:02:01.687188   54140 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1002 17:02:01.687213   54140 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:02:01.687225   54140 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:02:01.687323   54140 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 17:08:00.930014   54140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:08:00.930194   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:01.026397   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:01.026490   54140 retry.go:31] will retry after 224.511076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:01.253434   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:01.308521   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:01.308635   54140 retry.go:31] will retry after 306.757588ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:01.615967   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:01.672497   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:01.672590   54140 retry.go:31] will retry after 550.670232ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:02.225712   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:02.281133   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:02.281225   54140 retry.go:31] will retry after 480.496475ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:02.762103   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:02.815448   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:08:02.815560   54140 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:08:02.815586   54140 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:02.815642   54140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:08:02.815702   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:02.866105   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:02.866199   54140 retry.go:31] will retry after 177.016846ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:03.044946   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:03.098981   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:03.099073   54140 retry.go:31] will retry after 313.785911ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:03.415203   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:03.469105   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:03.469196   54140 retry.go:31] will retry after 315.156316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:03.785582   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:03.838688   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:03.838785   54140 retry.go:31] will retry after 573.313578ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:04.414577   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:04.469532   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:08:04.469645   54140 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:08:04.469665   54140 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:04.469673   54140 start.go:128] duration metric: createHost completed in 6m3.585979635s
	I1002 17:08:04.469741   54140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 17:08:04.469795   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:04.520248   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:04.520334   54140 retry.go:31] will retry after 185.06235ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:04.705874   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:04.760384   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:04.760474   54140 retry.go:31] will retry after 494.427115ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:05.257306   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:05.311452   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:05.311554   54140 retry.go:31] will retry after 722.679399ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:06.034668   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:06.090560   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:08:06.090655   54140 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:08:06.090682   54140 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:06.090740   54140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 17:08:06.090791   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:06.141227   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:06.141312   54140 retry.go:31] will retry after 183.498281ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:06.327274   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:06.382205   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:06.382306   54140 retry.go:31] will retry after 377.668638ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:06.760345   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:06.813889   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	I1002 17:08:06.813991   54140 retry.go:31] will retry after 810.881533ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:07.627234   54140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000
	W1002 17:08:07.683485   54140 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000 returned with exit code 1
	W1002 17:08:07.683591   54140 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	W1002 17:08:07.683611   54140 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-053000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-053000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:07.683621   54140 fix.go:56] fixHost completed within 6m26.186413113s
	I1002 17:08:07.683629   54140 start.go:83] releasing machines lock for "multinode-053000", held for 6m26.186458839s
	W1002 17:08:07.683705   54140 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-053000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1002 17:08:07.727371   54140 out.go:177] 
	W1002 17:08:07.749473   54140 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1002 17:08:07.749545   54140 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1002 17:08:07.749675   54140 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1002 17:08:07.794169   54140 out.go:177] 

                                                
                                                
** /stderr **
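The stderr above spends its final six minutes in one loop: minikube asks Docker which host port is published for 22/tcp so it can SSH into the node, Docker answers "No such container", and the call is retried with a short growing delay until the createHost timeout fires. A minimal Go sketch of that probe, assuming the Docker CLI is on PATH (hostPortFor and the fixed five-attempt loop are illustrative, not minikube's actual retry policy):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// hostPortFor asks Docker which host port is mapped to 22/tcp in the named
// container. It fails with "No such container" while the container is missing.
func hostPortFor(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		port, err := hostPortFor("multinode-053000")
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // rough backoff; the delays in the log above are jittered
	}
	fmt.Println("giving up: container never appeared")
}

The -f template is the same one the log shows; it indexes .NetworkSettings.Ports["22/tcp"][0].HostPort, which only exists once the container is actually running.
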
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-053000" : exit status 52
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-053000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "e4fa8ed05a160ad316054a2be2a9b5d9e3f51b5888a87053849585a3a0bcc41e",
	        "Created": "2023-10-03T00:02:01.220280467Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
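Note that the plain `docker inspect multinode-053000` above succeeds even though the container is gone: bare `docker inspect` matches any Docker object with that name, and here it found the leftover bridge network created at 17:02:01 (its Containers map is empty). A small sketch, assuming the Docker CLI is on PATH, that keeps the two lookups separate:

package main

import (
	"fmt"
	"os/exec"
)

// exists checks one object type only; `docker container inspect` and
// `docker network inspect` do not fall back to other object kinds the way
// bare `docker inspect` does.
func exists(kind, name string) bool {
	return exec.Command("docker", kind, "inspect", name).Run() == nil
}

func main() {
	name := "multinode-053000"
	for _, kind := range []string{"container", "network"} {
		if exists(kind, name) {
			fmt.Printf("%s %q exists\n", kind, name)
		} else {
			fmt.Printf("%s %q not found\n", kind, name)
		}
	}
}
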
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (93.732278ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:08:08.064802   54493 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (793.33s)
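For context on how this run ended up on 192.168.76.0/24: the log opens with `docker network create` for 192.168.67.0/24 failing with "Pool overlaps with other one on this address space", after which minikube marks that subnet reserved and retries with the next free private /24. A rough Go sketch of that fallback, assuming the Docker CLI is on PATH (the candidate octet list is illustrative, not minikube's exact subnet allocator):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork mirrors the flags recorded in the log at 17:02:01; only the
// subnet and gateway change between attempts.
func createNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=65535",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w", strings.TrimSpace(string(out)), err)
	}
	return nil
}

func main() {
	// Candidate third octets; the log shows .67 being rejected and .76 accepted.
	for _, octet := range []int{58, 67, 76, 85} {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		err := createNetwork("multinode-053000", subnet, gateway)
		if err == nil {
			fmt.Println("created network on", subnet)
			return
		}
		if strings.Contains(err.Error(), "Pool overlaps") {
			fmt.Println("subnet taken, trying next:", subnet)
			continue
		}
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("no free subnet found")
}
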

                                                
                                    
TestMultiNode/serial/DeleteNode (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 node delete m03: exit status 80 (189.238857ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 node delete m03": exit status 80
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr: exit status 7 (94.380527ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 17:08:08.297041   54501 out.go:296] Setting OutFile to fd 1 ...
	I1002 17:08:08.297333   54501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:08:08.297338   54501 out.go:309] Setting ErrFile to fd 2...
	I1002 17:08:08.297342   54501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:08:08.297517   54501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 17:08:08.297702   54501 out.go:303] Setting JSON to false
	I1002 17:08:08.297724   54501 mustload.go:65] Loading cluster: multinode-053000
	I1002 17:08:08.297773   54501 notify.go:220] Checking for updates...
	I1002 17:08:08.297994   54501 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 17:08:08.298008   54501 status.go:255] checking status of multinode-053000 ...
	I1002 17:08:08.298416   54501 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:08.348730   54501 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:08.348783   54501 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1002 17:08:08.348799   54501 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1002 17:08:08.348812   54501 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1002 17:08:08.348819   54501 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "e4fa8ed05a160ad316054a2be2a9b5d9e3f51b5888a87053849585a3a0bcc41e",
	        "Created": "2023-10-03T00:02:01.220280467Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (94.573204ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:08:08.498882   54507 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.43s)
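The status failures above all reduce to one probe: `docker container inspect --format={{.State.Status}}` exits 1 with "No such container", and minikube reports the host as "Nonexistent" rather than a real state. A condensed Go sketch of that mapping, assuming the Docker CLI is on PATH (hostState is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the container state, translating a missing container
// into "Nonexistent" the way the status output above does.
func hostState(container string) string {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown"
	}
	return strings.TrimSpace(string(out)) // e.g. "running", "exited"
}

func main() {
	fmt.Println("host:", hostState("multinode-053000"))
}
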

                                                
                                    
TestMultiNode/serial/StopMultiNode (14.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 stop: exit status 82 (13.828696819s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	* Stopping node "multinode-053000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-053000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-053000 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status: exit status 7 (95.471558ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:08:22.423738   54530 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1002 17:08:22.423749   54530 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr: exit status 7 (94.354116ms)

                                                
                                                
-- stdout --
	multinode-053000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 17:08:22.466362   54534 out.go:296] Setting OutFile to fd 1 ...
	I1002 17:08:22.466664   54534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:08:22.466669   54534 out.go:309] Setting ErrFile to fd 2...
	I1002 17:08:22.466673   54534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:08:22.466868   54534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 17:08:22.467043   54534 out.go:303] Setting JSON to false
	I1002 17:08:22.467067   54534 mustload.go:65] Loading cluster: multinode-053000
	I1002 17:08:22.467105   54534 notify.go:220] Checking for updates...
	I1002 17:08:22.467367   54534 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 17:08:22.467381   54534 status.go:255] checking status of multinode-053000 ...
	I1002 17:08:22.467785   54534 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:22.518168   54534 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:22.518210   54534 status.go:330] multinode-053000 host status = "" (err=state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	)
	I1002 17:08:22.518231   54534 status.go:257] multinode-053000 status: &{Name:multinode-053000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1002 17:08:22.518246   54534 status.go:260] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	E1002 17:08:22.518253   54534 status.go:263] The "multinode-053000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-053000 status --alsologtostderr": multinode-053000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "e4fa8ed05a160ad316054a2be2a9b5d9e3f51b5888a87053849585a3a0bcc41e",
	        "Created": "2023-10-03T00:02:01.220280467Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (94.081064ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:08:22.667603   54540 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (14.17s)
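The RestartKeepsNodes log earlier also shows the disk probes minikube runs over SSH once a node is up: `df -h /var | awk 'NR==2{print $5}'` for the used percentage and `df -BG /var | awk 'NR==2{print $4}'` for the free gigabytes (both fail in this run because SSH never comes up). A small Go sketch of the same column extraction, assuming GNU df as found inside the Linux guest (dfField is an illustrative helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dfField runs df and returns the given 1-based column of the first data row,
// mirroring `awk 'NR==2{print $N}'` from the log.
func dfField(flag, path string, col int) (string, error) {
	out, err := exec.Command("df", flag, path).Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(lines[1])
	if len(fields) < col {
		return "", fmt.Errorf("df row has %d fields, want at least %d", len(fields), col)
	}
	return fields[col-1], nil
}

func main() {
	if used, err := dfField("-h", "/var", 5); err == nil {
		fmt.Println("/var used:", used) // Use% column of `df -h`
	}
	if free, err := dfField("-BG", "/var", 4); err == nil {
		fmt.Println("/var free:", free) // Avail column of `df -BG`
	}
}
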

                                                
                                    
TestMultiNode/serial/RestartMultiNode (130.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1002 17:10:21.478085   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m10.315783088s)

                                                
                                                
-- stdout --
	* [multinode-053000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-053000 in cluster multinode-053000
	* Pulling base image ...
	* docker "multinode-053000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 17:08:22.766695   54546 out.go:296] Setting OutFile to fd 1 ...
	I1002 17:08:22.766992   54546 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:08:22.767003   54546 out.go:309] Setting ErrFile to fd 2...
	I1002 17:08:22.767007   54546 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 17:08:22.767195   54546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 17:08:22.768936   54546 out.go:303] Setting JSON to false
	I1002 17:08:22.791191   54546 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":23871,"bootTime":1696267831,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 17:08:22.791289   54546 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 17:08:22.813168   54546 out.go:177] * [multinode-053000] minikube v1.31.2 on Darwin 14.0
	I1002 17:08:22.856778   54546 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 17:08:22.856879   54546 notify.go:220] Checking for updates...
	I1002 17:08:22.878979   54546 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 17:08:22.900720   54546 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 17:08:22.923580   54546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 17:08:22.944917   54546 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 17:08:22.966651   54546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 17:08:22.990579   54546 config.go:182] Loaded profile config "multinode-053000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 17:08:22.991341   54546 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 17:08:23.049491   54546 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 17:08:23.049630   54546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:08:23.151845   54546 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:110 SystemTime:2023-10-03 00:08:23.14099201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:08:23.174424   54546 out.go:177] * Using the docker driver based on existing profile
	I1002 17:08:23.216434   54546 start.go:298] selected driver: docker
	I1002 17:08:23.216454   54546 start.go:902] validating driver "docker" against &{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-053000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 17:08:23.216527   54546 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 17:08:23.216649   54546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 17:08:23.319226   54546 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:110 SystemTime:2023-10-03 00:08:23.308422313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfi
ned name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manag
es Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker S
cout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 17:08:23.322330   54546 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 17:08:23.322366   54546 cni.go:84] Creating CNI manager for ""
	I1002 17:08:23.322375   54546 cni.go:136] 1 nodes found, recommending kindnet
	I1002 17:08:23.322387   54546 start_flags.go:321] config:
	{Name:multinode-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-053000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 17:08:23.343861   54546 out.go:177] * Starting control plane node multinode-053000 in cluster multinode-053000
	I1002 17:08:23.386723   54546 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 17:08:23.408659   54546 out.go:177] * Pulling base image ...
	I1002 17:08:23.452775   54546 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:08:23.452860   54546 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 17:08:23.452880   54546 cache.go:57] Caching tarball of preloaded images
	I1002 17:08:23.452875   54546 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 17:08:23.453080   54546 preload.go:174] Found /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 17:08:23.453102   54546 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 17:08:23.453260   54546 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/multinode-053000/config.json ...
	I1002 17:08:23.504601   54546 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 17:08:23.504642   54546 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 17:08:23.504663   54546 cache.go:195] Successfully downloaded all kic artifacts
	I1002 17:08:23.504710   54546 start.go:365] acquiring machines lock for multinode-053000: {Name:mk8edede740faa8024fd55789b3e24ceccf4bf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 17:08:23.504805   54546 start.go:369] acquired machines lock for "multinode-053000" in 75.363µs
	I1002 17:08:23.504825   54546 start.go:96] Skipping create...Using existing machine configuration
	I1002 17:08:23.504836   54546 fix.go:54] fixHost starting: 
	I1002 17:08:23.505065   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:23.554845   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:23.554898   54546 fix.go:102] recreateIfNeeded on multinode-053000: state= err=unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:23.554924   54546 fix.go:107] machineExists: false. err=machine does not exist
	I1002 17:08:23.577256   54546 out.go:177] * docker "multinode-053000" container is missing, will recreate.
	I1002 17:08:23.598794   54546 delete.go:124] DEMOLISHING multinode-053000 ...
	I1002 17:08:23.598983   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:23.650784   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 17:08:23.650835   54546 stop.go:75] unable to get state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:23.650857   54546 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:23.651210   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:23.701540   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:23.701592   54546 delete.go:82] Unable to get host status for multinode-053000, assuming it has already been deleted: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:23.701678   54546 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 17:08:23.752884   54546 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 17:08:23.752918   54546 kic.go:367] could not find the container multinode-053000 to remove it. will try anyways
	I1002 17:08:23.753004   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:23.803412   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	W1002 17:08:23.803456   54546 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:23.803541   54546 cli_runner.go:164] Run: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0"
	W1002 17:08:23.853575   54546 cli_runner.go:211] docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 17:08:23.853607   54546 oci.go:647] error shutdown multinode-053000: docker exec --privileged -t multinode-053000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:24.856187   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:24.909451   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:24.909502   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:24.909513   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:24.909546   54546 retry.go:31] will retry after 670.861183ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:25.582894   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:25.634283   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:25.634325   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:25.634335   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:25.634356   54546 retry.go:31] will retry after 563.355499ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:26.199494   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:26.255552   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:26.255595   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:26.255606   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:26.255626   54546 retry.go:31] will retry after 744.239616ms: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:27.001606   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:27.057128   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:27.057166   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:27.057187   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:27.057210   54546 retry.go:31] will retry after 1.657072442s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:28.714706   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:28.769157   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:28.769198   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:28.769219   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:28.769242   54546 retry.go:31] will retry after 3.492118017s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:32.262529   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:32.316220   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:32.316270   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:32.316282   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:32.316303   54546 retry.go:31] will retry after 4.952124206s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:37.269320   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:37.321989   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:37.322031   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:37.322051   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:37.322074   54546 retry.go:31] will retry after 3.773475286s: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:41.096499   54546 cli_runner.go:164] Run: docker container inspect multinode-053000 --format={{.State.Status}}
	W1002 17:08:41.153004   54546 cli_runner.go:211] docker container inspect multinode-053000 --format={{.State.Status}} returned with exit code 1
	I1002 17:08:41.153056   54546 oci.go:659] temporary error verifying shutdown: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	I1002 17:08:41.153072   54546 oci.go:661] temporary error: container multinode-053000 status is  but expect it to be exited
	I1002 17:08:41.153100   54546 oci.go:88] couldn't shut down multinode-053000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000
	 
	I1002 17:08:41.153170   54546 cli_runner.go:164] Run: docker rm -f -v multinode-053000
	I1002 17:08:41.206092   54546 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-053000
	W1002 17:08:41.257075   54546 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-053000 returned with exit code 1
	I1002 17:08:41.257183   54546 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:08:41.308892   54546 cli_runner.go:164] Run: docker network rm multinode-053000
	I1002 17:08:41.413709   54546 fix.go:114] Sleeping 1 second for extra luck!
	I1002 17:08:42.415896   54546 start.go:125] createHost starting for "" (driver="docker")
	I1002 17:08:42.439045   54546 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 17:08:42.439215   54546 start.go:159] libmachine.API.Create for "multinode-053000" (driver="docker")
	I1002 17:08:42.439314   54546 client.go:168] LocalClient.Create starting
	I1002 17:08:42.439485   54546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/ca.pem
	I1002 17:08:42.439567   54546 main.go:141] libmachine: Decoding PEM data...
	I1002 17:08:42.439602   54546 main.go:141] libmachine: Parsing certificate...
	I1002 17:08:42.439711   54546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17323-48076/.minikube/certs/cert.pem
	I1002 17:08:42.439773   54546 main.go:141] libmachine: Decoding PEM data...
	I1002 17:08:42.439793   54546 main.go:141] libmachine: Parsing certificate...
	I1002 17:08:42.440594   54546 cli_runner.go:164] Run: docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 17:08:42.492391   54546 cli_runner.go:211] docker network inspect multinode-053000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 17:08:42.492494   54546 network_create.go:281] running [docker network inspect multinode-053000] to gather additional debugging logs...
	I1002 17:08:42.492514   54546 cli_runner.go:164] Run: docker network inspect multinode-053000
	W1002 17:08:42.542473   54546 cli_runner.go:211] docker network inspect multinode-053000 returned with exit code 1
	I1002 17:08:42.542497   54546 network_create.go:284] error running [docker network inspect multinode-053000]: docker network inspect multinode-053000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-053000 not found
	I1002 17:08:42.542510   54546 network_create.go:286] output of [docker network inspect multinode-053000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-053000 not found
	
	** /stderr **
	I1002 17:08:42.542670   54546 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 17:08:42.594870   54546 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 17:08:42.595248   54546 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00101ac80}
	I1002 17:08:42.595264   54546 network_create.go:124] attempt to create docker network multinode-053000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1002 17:08:42.595343   54546 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-053000 multinode-053000
	I1002 17:08:42.682201   54546 network_create.go:108] docker network multinode-053000 192.168.58.0/24 created
	I1002 17:08:42.682235   54546 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-053000" container
	I1002 17:08:42.682355   54546 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 17:08:42.734787   54546 cli_runner.go:164] Run: docker volume create multinode-053000 --label name.minikube.sigs.k8s.io=multinode-053000 --label created_by.minikube.sigs.k8s.io=true
	I1002 17:08:42.799417   54546 oci.go:103] Successfully created a docker volume multinode-053000
	I1002 17:08:42.799537   54546 cli_runner.go:164] Run: docker run --rm --name multinode-053000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-053000 --entrypoint /usr/bin/test -v multinode-053000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 17:08:43.127193   54546 oci.go:107] Successfully prepared a docker volume multinode-053000
	I1002 17:08:43.127227   54546 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 17:08:43.127241   54546 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 17:08:43.127345   54546 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-053000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
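The repeated oci.go:659 / retry.go:31 entries in the stderr block above show minikube polling docker container inspect with increasing delays before giving up on verifying that the stale multinode-053000 container has exited. A minimal sketch of that poll-with-backoff pattern; the helper name and delays here are hypothetical and only illustrate the idea, not minikube's actual oci package:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForContainerGone polls `docker container inspect` until the container
	// reports an exited state or no longer exists, backing off between attempts.
	func waitForContainerGone(name string, attempts int) error {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").CombinedOutput()
			status := strings.TrimSpace(string(out))
			if err != nil && strings.Contains(status, "No such container") {
				return nil // already deleted, nothing left to verify
			}
			if err == nil && status == "exited" {
				return nil
			}
			time.Sleep(delay)
			delay *= 2 // roughly the increasing delays seen in the log above
		}
		return fmt.Errorf("container %q never reached an exited state", name)
	}

	func main() {
		if err := waitForContainerGone("multinode-053000", 8); err != nil {
			fmt.Println("verify shutdown:", err)
		}
	}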
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-053000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-053000
helpers_test.go:235: (dbg) docker inspect multinode-053000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-053000",
	        "Id": "e15bc259f8de02a72814b7ef23942a7466392fe11ec68c9bfe328fd7a3734f99",
	        "Created": "2023-10-03T00:08:42.641221188Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-053000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-053000 -n multinode-053000: exit status 7 (95.080043ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:10:33.200778   54647 status.go:249] status error: host: state: unknown state "multinode-053000": docker container inspect multinode-053000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-053000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-053000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (130.53s)
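Earlier in the same stderr block, network.go skips the reserved 192.168.49.0/24 subnet and picks 192.168.58.0/24 before recreating the multinode-053000 network. A rough sketch of scanning candidate /24 ranges against the subnets Docker already uses; the helper below is an assumption made for illustration, not minikube's network package:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// usedSubnets collects the subnets of every Docker network on the host.
	func usedSubnets() (map[string]bool, error) {
		names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return nil, err
		}
		used := map[string]bool{}
		for _, name := range strings.Fields(string(names)) {
			out, err := exec.Command("docker", "network", "inspect", name,
				"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
			if err != nil {
				continue // e.g. a network removed between ls and inspect
			}
			for _, s := range strings.Fields(string(out)) {
				used[s] = true
			}
		}
		return used, nil
	}

	func main() {
		used, err := usedSubnets()
		if err != nil {
			fmt.Println(err)
			return
		}
		// Candidate ranges mirror the ones mentioned in the log above.
		for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
			if !used[c] {
				fmt.Println("first free candidate subnet:", c)
				return
			}
		}
		fmt.Println("no free candidate subnet")
	}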

                                                
                                    
x
+
TestScheduledStopUnix (300.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-440000 --memory=2048 --driver=docker 
E1002 17:15:21.488509   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:15:33.520835   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:16:44.544109   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-440000 --memory=2048 --driver=docker : signal: killed (5m0.003957256s)

                                                
                                                
-- stdout --
	* [scheduled-stop-440000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-440000 in cluster scheduled-stop-440000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-440000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-440000 in cluster scheduled-stop-440000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-10-02 17:18:13.381111 -0700 PDT m=+4541.335040958
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-440000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-440000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-440000",
	        "Id": "2c1b58da02d1bdce64c6af3442d24973f2631e5828db4fd538a7c4dcf55a60d2",
	        "Created": "2023-10-03T00:13:14.408488357Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-440000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-440000 -n scheduled-stop-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-440000 -n scheduled-stop-440000: exit status 7 (95.283518ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:18:13.532786   55295 status.go:249] status error: host: state: unknown state "scheduled-stop-440000": docker container inspect scheduled-stop-440000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-440000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-440000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-440000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-440000
--- FAIL: TestScheduledStopUnix (300.87s)
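The "signal: killed (5m0.003957256s)" result indicates the minikube start child process was killed after roughly five minutes, while still at the "Creating docker container" step. A hedged sketch of running a command under such a deadline with context.WithTimeout and exec.CommandContext; this is illustrative only, the suite's own runner may differ:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Kill the child process if it has not finished within five minutes.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"start", "-p", "scheduled-stop-440000", "--memory=2048", "--driver=docker")
		out, err := cmd.CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			// Matches the "signal: killed" failure seen in the report.
			fmt.Printf("timed out, output so far:\n%s\n", out)
			return
		}
		if err != nil {
			fmt.Println("start failed:", err)
		}
	}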

                                                
                                    
x
+
TestSkaffold (300.88s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe111877633 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-661000 --memory=2600 --driver=docker 
E1002 17:20:21.498648   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:20:33.531068   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 17:21:56.589058   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-661000 --memory=2600 --driver=docker : signal: killed (4m57.900346745s)

                                                
                                                
-- stdout --
	* [skaffold-661000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-661000 in cluster skaffold-661000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-661000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-661000 in cluster skaffold-661000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2023-10-02 17:23:14.268465 -0700 PDT m=+4842.211675026
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-661000
helpers_test.go:235: (dbg) docker inspect skaffold-661000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-661000",
	        "Id": "537fe029d629db68a217ecc60d4107b967bd8d84730fea0e52855dc6e0093c4b",
	        "Created": "2023-10-03T00:18:17.375102744Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-661000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-661000 -n skaffold-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-661000 -n skaffold-661000: exit status 7 (94.547592ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 17:23:14.422167   55413 status.go:249] status error: host: state: unknown state "skaffold-661000": docker container inspect skaffold-661000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-661000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-661000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-661000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-661000
--- FAIL: TestSkaffold (300.88s)
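The docker inspect output in the post-mortem above is the leftover bridge network for the deleted skaffold-661000 profile. A small sketch of decoding just the fields the report prints (name, subnet, gateway, labels) with encoding/json; the struct below is an assumption made for illustration, not a type from the test suite:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// networkInfo mirrors only the fields shown in the post-mortem above.
	type networkInfo struct {
		Name string
		IPAM struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
		Labels map[string]string
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "skaffold-661000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []networkInfo // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nets {
			for _, c := range n.IPAM.Config {
				fmt.Printf("%s: %s via %s (labels: %v)\n", n.Name, c.Subnet, c.Gateway, n.Labels)
			}
		}
	}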

                                                
                                    
x
+
TestInsufficientStorage (300.72s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-364000 --memory=2048 --output=json --wait=true --driver=docker 
E1002 17:25:21.508800   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 17:25:33.542056   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-364000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.002899676s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"af10bd51-7c5b-4275-853c-dc8df21cf367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-364000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"129da64a-faa2-4770-91f8-5a1cb46039cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17323"}}
	{"specversion":"1.0","id":"dce8d85a-a978-4b02-87e7-7310f2a80568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig"}}
	{"specversion":"1.0","id":"ab4d6e98-0690-4f71-b191-9dc5ddb89be8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"143bed99-436a-4453-9c23-761da04313c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b04e65cc-f7eb-4c9f-921f-0ce69cdbd8b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube"}}
	{"specversion":"1.0","id":"b94c99b0-6a33-4dd7-957d-330e10eb50d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"520758da-ddfe-40ed-9b10-b0e08848dcc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b176db82-12f8-4941-a67c-db2d9eb68d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9845bfe6-3e35-4585-b28d-d2a5c7c696a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fdaedce-c1af-4e85-846d-759c7d3fc748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"bd92a420-fd56-4d7d-81fc-cefccf7ec453","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-364000 in cluster insufficient-storage-364000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"14d044ec-50d8-4159-9639-ba5211989c61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ef8e88c-a966-42c9-8c74-09535ac23099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-364000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-364000 --output=json --layout=cluster: context deadline exceeded (803ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-364000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-364000
--- FAIL: TestInsufficientStorage (300.72s)
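With --output=json, minikube emits one CloudEvents-style JSON object per line, as shown in the stdout above; the subsequent status call then fails to unmarshal because its context had already expired (803ns) and it produced no JSON at all. A sketch of decoding those per-line events; the field names come from the output above, while the decoder itself is only illustrative:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// startEvent captures the fields visible in the JSON lines above.
	type startEvent struct {
		Specversion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Read events line by line from stdin, e.g.:
		//   minikube start --output=json ... | go run decode.go
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // JSON lines can be long
		for sc.Scan() {
			var ev startEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Fprintln(os.Stderr, "skipping malformed line:", err)
				continue
			}
			if step, ok := ev.Data["currentstep"]; ok {
				fmt.Printf("step %s/%s: %s\n", step, ev.Data["totalsteps"], ev.Data["message"])
			}
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read error:", err)
		}
	}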

                                                
                                    

Test pass (139/181)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 8.9
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.28.2/json-events 8
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.35
16 TestDownloadOnly/DeleteAll 0.64
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.37
18 TestDownloadOnlyKic 2.04
19 TestBinaryMirror 1.65
22 TestAddons/Setup 146.77
26 TestAddons/parallel/InspektorGadget 10.87
27 TestAddons/parallel/MetricsServer 6.23
28 TestAddons/parallel/HelmTiller 11.22
30 TestAddons/parallel/CSI 44.59
31 TestAddons/parallel/Headlamp 13.55
32 TestAddons/parallel/CloudSpanner 5.78
33 TestAddons/parallel/LocalPath 54.69
36 TestAddons/serial/GCPAuth/Namespaces 0.11
37 TestAddons/StoppedEnableDisable 11.75
45 TestHyperKitDriverInstallOrUpdate 6.09
48 TestErrorSpam/setup 22.17
49 TestErrorSpam/start 2.09
50 TestErrorSpam/status 1.21
51 TestErrorSpam/pause 1.7
52 TestErrorSpam/unpause 1.8
53 TestErrorSpam/stop 11.5
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 35.95
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 38.1
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/CacheCmd/cache/add_remote 5.2
65 TestFunctional/serial/CacheCmd/cache/add_local 1.85
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
67 TestFunctional/serial/CacheCmd/cache/list 0.07
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
69 TestFunctional/serial/CacheCmd/cache/cache_reload 2.41
70 TestFunctional/serial/CacheCmd/cache/delete 0.14
71 TestFunctional/serial/MinikubeKubectlCmd 0.55
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.75
73 TestFunctional/serial/ExtraConfig 41.74
74 TestFunctional/serial/ComponentHealth 0.06
75 TestFunctional/serial/LogsCmd 3.17
76 TestFunctional/serial/LogsFileCmd 3.4
77 TestFunctional/serial/InvalidService 4.58
79 TestFunctional/parallel/ConfigCmd 0.44
80 TestFunctional/parallel/DashboardCmd 17.83
81 TestFunctional/parallel/DryRun 1.63
82 TestFunctional/parallel/InternationalLanguage 0.73
83 TestFunctional/parallel/StatusCmd 1.21
88 TestFunctional/parallel/AddonsCmd 0.23
89 TestFunctional/parallel/PersistentVolumeClaim 28.02
91 TestFunctional/parallel/SSHCmd 0.78
92 TestFunctional/parallel/CpCmd 1.94
93 TestFunctional/parallel/MySQL 38.13
94 TestFunctional/parallel/FileSync 0.45
95 TestFunctional/parallel/CertSync 2.66
99 TestFunctional/parallel/NodeLabels 0.09
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
103 TestFunctional/parallel/License 0.53
104 TestFunctional/parallel/Version/short 0.09
105 TestFunctional/parallel/Version/components 0.97
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.37
111 TestFunctional/parallel/ImageCommands/Setup 3
112 TestFunctional/parallel/DockerEnv/bash 2.11
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.61
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.9
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.52
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.09
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.9
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.75
123 TestFunctional/parallel/ServiceCmd/DeployApp 16.19
124 TestFunctional/parallel/ServiceCmd/List 0.43
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
126 TestFunctional/parallel/ServiceCmd/HTTPS 15
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.21
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
138 TestFunctional/parallel/ServiceCmd/Format 15
139 TestFunctional/parallel/ServiceCmd/URL 15
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
141 TestFunctional/parallel/ProfileCmd/profile_list 0.47
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
143 TestFunctional/parallel/MountCmd/any-port 8.44
144 TestFunctional/parallel/MountCmd/specific-port 2.36
145 TestFunctional/parallel/MountCmd/VerifyCleanup 3.01
146 TestFunctional/delete_addon-resizer_images 0.2
147 TestFunctional/delete_my-image_image 0.05
148 TestFunctional/delete_minikube_cached_images 0.06
152 TestImageBuild/serial/Setup 22.7
153 TestImageBuild/serial/NormalBuild 1.87
154 TestImageBuild/serial/BuildWithBuildArg 1.04
155 TestImageBuild/serial/BuildWithDockerIgnore 0.86
156 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
166 TestJSONOutput/start/Command 37.86
167 TestJSONOutput/start/Audit 0
169 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Command 0.61
173 TestJSONOutput/pause/Audit 0
175 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Command 0.66
179 TestJSONOutput/unpause/Audit 0
181 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/stop/Command 11
185 TestJSONOutput/stop/Audit 0
187 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
189 TestErrorJSONOutput 0.74
191 TestKicCustomNetwork/create_custom_network 24.8
192 TestKicCustomNetwork/use_default_bridge_network 25.39
193 TestKicExistingNetwork 24.68
194 TestKicCustomSubnet 24.69
195 TestKicStaticIP 25.22
196 TestMainNoArgs 0.07
197 TestMinikubeProfile 52.68
200 TestMountStart/serial/StartWithMountFirst 7.72
201 TestMountStart/serial/VerifyMountFirst 0.37
202 TestMountStart/serial/StartWithMountSecond 7.85
221 TestPreload 159.29
242 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.95
243 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 8.65
x
+
TestDownloadOnly/v1.16.0/json-events (8.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-511000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-511000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (8.896335342s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-511000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-511000: exit status 85 (302.58413ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-511000 | jenkins | v1.31.2 | 02 Oct 23 16:02 PDT |          |
	|         | -p download-only-511000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 16:02:32
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 16:02:32.089581   48558 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:02:32.089882   48558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:02:32.089887   48558 out.go:309] Setting ErrFile to fd 2...
	I1002 16:02:32.089891   48558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:02:32.090066   48558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	W1002 16:02:32.090170   48558 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17323-48076/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17323-48076/.minikube/config/config.json: no such file or directory
	I1002 16:02:32.091837   48558 out.go:303] Setting JSON to true
	I1002 16:02:32.115032   48558 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":19921,"bootTime":1696267831,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 16:02:32.115135   48558 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 16:02:32.136972   48558 out.go:97] [download-only-511000] minikube v1.31.2 on Darwin 14.0
	I1002 16:02:32.157510   48558 out.go:169] MINIKUBE_LOCATION=17323
	W1002 16:02:32.137204   48558 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 16:02:32.137200   48558 notify.go:220] Checking for updates...
	I1002 16:02:32.203749   48558 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 16:02:32.224867   48558 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 16:02:32.246939   48558 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 16:02:32.269051   48558 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	W1002 16:02:32.311884   48558 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 16:02:32.312384   48558 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 16:02:32.371404   48558 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 16:02:32.371540   48558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:02:32.478296   48558 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-02 23:02:32.466588175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:02:32.499827   48558 out.go:97] Using the docker driver based on user configuration
	I1002 16:02:32.499868   48558 start.go:298] selected driver: docker
	I1002 16:02:32.499904   48558 start.go:902] validating driver "docker" against <nil>
	I1002 16:02:32.500113   48558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:02:32.607036   48558 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-02 23:02:32.594300737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:02:32.607203   48558 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 16:02:32.610848   48558 start_flags.go:384] Using suggested 5891MB memory alloc based on sys=32768MB, container=5939MB
	I1002 16:02:32.611021   48558 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 16:02:32.631972   48558 out.go:169] Using Docker Desktop driver with root privileges
	I1002 16:02:32.652992   48558 cni.go:84] Creating CNI manager for ""
	I1002 16:02:32.653030   48558 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 16:02:32.653083   48558 start_flags.go:321] config:
	{Name:download-only-511000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-511000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:02:32.675157   48558 out.go:97] Starting control plane node download-only-511000 in cluster download-only-511000
	I1002 16:02:32.675199   48558 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 16:02:32.696944   48558 out.go:97] Pulling base image ...
	I1002 16:02:32.697035   48558 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 16:02:32.697144   48558 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 16:02:32.748665   48558 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 16:02:32.749183   48558 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 16:02:32.749297   48558 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 16:02:32.756345   48558 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1002 16:02:32.756363   48558 cache.go:57] Caching tarball of preloaded images
	I1002 16:02:32.756562   48558 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 16:02:32.777986   48558 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 16:02:32.777995   48558 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1002 16:02:32.855962   48558 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-511000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

TestDownloadOnly/v1.28.2/json-events (8s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-511000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-511000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker : (8.000198092s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (8.00s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-511000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-511000: exit status 85 (347.422347ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-511000 | jenkins | v1.31.2 | 02 Oct 23 16:02 PDT |          |
	|         | -p download-only-511000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-511000 | jenkins | v1.31.2 | 02 Oct 23 16:02 PDT |          |
	|         | -p download-only-511000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 16:02:41
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 16:02:41.292932   48596 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:02:41.293218   48596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:02:41.293224   48596 out.go:309] Setting ErrFile to fd 2...
	I1002 16:02:41.293228   48596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:02:41.293404   48596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	W1002 16:02:41.293498   48596 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17323-48076/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17323-48076/.minikube/config/config.json: no such file or directory
	I1002 16:02:41.294707   48596 out.go:303] Setting JSON to true
	I1002 16:02:41.316326   48596 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":19930,"bootTime":1696267831,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 16:02:41.316417   48596 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 16:02:41.337664   48596 out.go:97] [download-only-511000] minikube v1.31.2 on Darwin 14.0
	I1002 16:02:41.359351   48596 out.go:169] MINIKUBE_LOCATION=17323
	I1002 16:02:41.337919   48596 notify.go:220] Checking for updates...
	I1002 16:02:41.402597   48596 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 16:02:41.424189   48596 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 16:02:41.445619   48596 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 16:02:41.466693   48596 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	W1002 16:02:41.509585   48596 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 16:02:41.510267   48596 config.go:182] Loaded profile config "download-only-511000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1002 16:02:41.510347   48596 start.go:810] api.Load failed for download-only-511000: filestore "download-only-511000": Docker machine "download-only-511000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 16:02:41.510492   48596 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 16:02:41.510529   48596 start.go:810] api.Load failed for download-only-511000: filestore "download-only-511000": Docker machine "download-only-511000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 16:02:41.568062   48596 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 16:02:41.568195   48596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:02:41.672000   48596 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-02 23:02:41.65959564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfine
d name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages
Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sco
ut Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:02:41.693694   48596 out.go:97] Using the docker driver based on existing profile
	I1002 16:02:41.693751   48596 start.go:298] selected driver: docker
	I1002 16:02:41.693763   48596 start.go:902] validating driver "docker" against &{Name:download-only-511000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-511000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:02:41.694027   48596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:02:41.803052   48596 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:64 SystemTime:2023-10-02 23:02:41.788438153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:02:41.806192   48596 cni.go:84] Creating CNI manager for ""
	I1002 16:02:41.806217   48596 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 16:02:41.806233   48596 start_flags.go:321] config:
	{Name:download-only-511000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-511000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:02:41.827531   48596 out.go:97] Starting control plane node download-only-511000 in cluster download-only-511000
	I1002 16:02:41.827572   48596 cache.go:122] Beginning downloading kic base image for docker with docker
	I1002 16:02:41.848808   48596 out.go:97] Pulling base image ...
	I1002 16:02:41.848919   48596 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 16:02:41.849008   48596 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 16:02:41.901363   48596 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 16:02:41.901588   48596 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 16:02:41.901615   48596 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I1002 16:02:41.901620   48596 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I1002 16:02:41.901629   48596 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 16:02:41.906398   48596 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 16:02:41.906409   48596 cache.go:57] Caching tarball of preloaded images
	I1002 16:02:41.906874   48596 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 16:02:41.927729   48596 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 16:02:41.927770   48596 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1002 16:02:42.016922   48596 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /Users/jenkins/minikube-integration/17323-48076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-511000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.35s)

TestDownloadOnly/DeleteAll (0.64s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.64s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-511000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (2.04s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-735000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-735000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-735000
--- PASS: TestDownloadOnlyKic (2.04s)

TestBinaryMirror (1.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-690000 --alsologtostderr --binary-mirror http://127.0.0.1:55984 --driver=docker 
aaa_download_only_test.go:304: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-690000 --alsologtostderr --binary-mirror http://127.0.0.1:55984 --driver=docker : (1.035922626s)
helpers_test.go:175: Cleaning up "binary-mirror-690000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-690000
--- PASS: TestBinaryMirror (1.65s)

TestAddons/Setup (146.77s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-129000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-darwin-amd64 start -p addons-129000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.771342037s)
--- PASS: TestAddons/Setup (146.77s)

TestAddons/parallel/InspektorGadget (10.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xlmmr" [aa54a2fe-9328-4796-af6d-ef5940ce1343] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012875351s
addons_test.go:819: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-129000
addons_test.go:819: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-129000: (5.855090086s)
--- PASS: TestAddons/parallel/InspektorGadget (10.87s)

TestAddons/parallel/MetricsServer (6.23s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 4.887865ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-mfw4w" [bea0172a-466e-4ac0-8290-b138fcb91129] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016531969s
addons_test.go:393: (dbg) Run:  kubectl --context addons-129000 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-darwin-amd64 -p addons-129000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:410: (dbg) Done: out/minikube-darwin-amd64 -p addons-129000 addons disable metrics-server --alsologtostderr -v=1: (1.144338437s)
--- PASS: TestAddons/parallel/MetricsServer (6.23s)

TestAddons/parallel/HelmTiller (11.22s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 4.173075ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-mxlqm" [f41ccc1e-10f2-4015-99bd-ffb793158426] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.019957286s
addons_test.go:451: (dbg) Run:  kubectl --context addons-129000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-129000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.396442234s)
addons_test.go:468: (dbg) Run:  out/minikube-darwin-amd64 -p addons-129000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.22s)

TestAddons/parallel/CSI (44.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 4.524617ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-129000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-129000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f0dc701b-c0dc-4e78-9acd-c6dbfb222884] Pending
helpers_test.go:344: "task-pv-pod" [f0dc701b-c0dc-4e78-9acd-c6dbfb222884] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f0dc701b-c0dc-4e78-9acd-c6dbfb222884] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012745913s
addons_test.go:562: (dbg) Run:  kubectl --context addons-129000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-129000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-129000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-129000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-129000 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-129000 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-129000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-129000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [33aaa995-0f7d-44ec-a157-90167c50d1bf] Pending
helpers_test.go:344: "task-pv-pod-restore" [33aaa995-0f7d-44ec-a157-90167c50d1bf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [33aaa995-0f7d-44ec-a157-90167c50d1bf] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.013920811s
addons_test.go:604: (dbg) Run:  kubectl --context addons-129000 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-129000 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-129000 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-darwin-amd64 -p addons-129000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-darwin-amd64 -p addons-129000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.28746069s)
addons_test.go:620: (dbg) Run:  out/minikube-darwin-amd64 -p addons-129000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:620: (dbg) Done: out/minikube-darwin-amd64 -p addons-129000 addons disable volumesnapshots --alsologtostderr -v=1: (1.094734824s)
--- PASS: TestAddons/parallel/CSI (44.59s)

TestAddons/parallel/Headlamp (13.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-129000 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-129000 --alsologtostderr -v=1: (1.530195177s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-kv74q" [14a03257-20e2-48f2-9e73-b4b432c11c28] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-kv74q" [14a03257-20e2-48f2-9e73-b4b432c11c28] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.015870117s
--- PASS: TestAddons/parallel/Headlamp (13.55s)

TestAddons/parallel/CloudSpanner (5.78s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-knwh8" [194dd27e-f6e6-4019-b196-321127f46868] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0085825s
addons_test.go:838: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-129000
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

TestAddons/parallel/LocalPath (54.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-129000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-129000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [14e4374a-c46c-4f59-93c4-2f61bbf5a4c4] Pending
helpers_test.go:344: "test-local-path" [14e4374a-c46c-4f59-93c4-2f61bbf5a4c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [14e4374a-c46c-4f59-93c4-2f61bbf5a4c4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [14e4374a-c46c-4f59-93c4-2f61bbf5a4c4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.010137281s
addons_test.go:869: (dbg) Run:  kubectl --context addons-129000 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-darwin-amd64 -p addons-129000 ssh "cat /opt/local-path-provisioner/pvc-b09878c8-645a-4348-890a-86943732db06_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-129000 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-129000 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-darwin-amd64 -p addons-129000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-darwin-amd64 -p addons-129000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.283406582s)
--- PASS: TestAddons/parallel/LocalPath (54.69s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-129000 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-129000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-129000
addons_test.go:150: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-129000: (11.075270697s)
addons_test.go:154: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-129000
addons_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-129000
addons_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-129000
--- PASS: TestAddons/StoppedEnableDisable (11.75s)

TestHyperKitDriverInstallOrUpdate (6.09s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.09s)

TestErrorSpam/setup (22.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-800000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-800000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 --driver=docker : (22.165289945s)
--- PASS: TestErrorSpam/setup (22.17s)

TestErrorSpam/start (2.09s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 start --dry-run
--- PASS: TestErrorSpam/start (2.09s)

                                                
                                    
TestErrorSpam/status (1.21s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 status
--- PASS: TestErrorSpam/status (1.21s)

                                                
                                    
TestErrorSpam/pause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 pause
--- PASS: TestErrorSpam/pause (1.70s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (11.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 stop: (10.887258101s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-800000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-800000 stop
--- PASS: TestErrorSpam/stop (11.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17323-48076/.minikube/files/etc/test/nested/copy/48556/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (35.95s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-165000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-165000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (35.949919542s)
--- PASS: TestFunctional/serial/StartWithProxy (35.95s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.1s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-165000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-165000 --alsologtostderr -v=8: (38.097305571s)
functional_test.go:659: soft start took 38.097770093s for "functional-165000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.10s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-165000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 cache add registry.k8s.io/pause:3.1: (1.854294058s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 cache add registry.k8s.io/pause:3.3: (1.762008531s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 cache add registry.k8s.io/pause:latest: (1.584722181s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local253751225/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cache add minikube-local-cache-test:functional-165000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 cache add minikube-local-cache-test:functional-165000: (1.209372679s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cache delete minikube-local-cache-test:functional-165000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-165000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.85s)
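
The add_local step above builds a throwaway image from a temporary directory, pushes it into the minikube cache, then removes it from the cache and from the local docker daemon. A rough standalone sketch of that flow follows, assuming docker and a minikube binary on PATH; the Dockerfile contents and image tag are placeholders, not what the test actually uses.

	package main

	// Sketch of the add_local flow: build a tiny local image from a temp dir,
	// add it to the minikube cache, then delete it again. Tag and Dockerfile
	// contents are placeholders.

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
	)

	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v: %v", name, args, err)
		}
	}

	func main() {
		dir, err := os.MkdirTemp("", "local-cache-test")
		if err != nil {
			log.Fatal(err)
		}
		defer os.RemoveAll(dir)

		// A minimal Dockerfile; the image contents are irrelevant to caching.
		if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte("FROM scratch\nADD Dockerfile /\n"), 0o644); err != nil {
			log.Fatal(err)
		}

		const tag = "minikube-local-cache-test:example"
		run("docker", "build", "-t", tag, dir)
		run("minikube", "-p", "functional-165000", "cache", "add", tag)
		run("minikube", "-p", "functional-165000", "cache", "delete", tag)
		run("docker", "rmi", tag)
	}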

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (395.410577ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 cache reload: (1.197189033s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.41s)
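
The cache_reload sequence removes the image inside the node, confirms crictl no longer sees it (the expected exit status 1 above), runs cache reload, and confirms the image is back. Below is a hedged sketch of the same round trip using only the CLI; the profile and image name are taken from the log, everything else is illustrative.

	package main

	// Sketch of the cache reload round trip: delete a cached image inside the
	// node, expect the inspect to fail, reload the cache, expect it to succeed.

	import (
		"fmt"
		"os/exec"
	)

	const profile = "functional-165000"
	const image = "registry.k8s.io/pause:latest"

	func mk(args ...string) error {
		return exec.Command("minikube", append([]string{"-p", profile}, args...)...).Run()
	}

	func main() {
		// Remove the image from the node's container runtime.
		if err := mk("ssh", "sudo docker rmi "+image); err != nil {
			fmt.Println("rmi failed:", err)
			return
		}
		// The inspect is now expected to fail, as in the log above.
		if err := mk("ssh", "sudo crictl inspecti "+image); err == nil {
			fmt.Println("expected inspecti to fail after rmi")
			return
		}
		// cache reload pushes cached images back into the node.
		if err := mk("cache", "reload"); err != nil {
			fmt.Println("cache reload failed:", err)
			return
		}
		if err := mk("ssh", "sudo crictl inspecti "+image); err != nil {
			fmt.Println("image still missing after reload:", err)
			return
		}
		fmt.Println("cache reload round trip ok")
	}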

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 kubectl -- --context functional-165000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.75s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-165000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.75s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.74s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-165000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-165000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.740968495s)
functional_test.go:757: restart took 41.741146092s for "functional-165000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.74s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-165000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
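
ComponentHealth lists the control-plane pods as JSON and checks each one's phase and Ready condition, which is what the phase/status lines above reflect. A small sketch of that check is shown below, decoding only the fields it needs; the struct shape is trimmed for the sketch and is not the real test's types.

	package main

	// Sketch of the control-plane health check: list pods labelled
	// tier=control-plane in kube-system and report phase plus the Ready
	// condition for each (the test requires Running / True).

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-165000",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}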

                                                
                                    
TestFunctional/serial/LogsCmd (3.17s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 logs: (3.168729992s)
--- PASS: TestFunctional/serial/LogsCmd (3.17s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd223098680/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd223098680/001/logs.txt: (3.396430307s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.40s)

                                                
                                    
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-165000 apply -f testdata/invalidsvc.yaml
E1002 16:10:21.476547   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:21.483838   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:21.494298   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:21.514521   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:21.554768   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:21.636957   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:21.797080   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:22.117659   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:22.757903   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
E1002 16:10:24.040056   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-165000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-165000: exit status 115 (566.060784ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31692 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-165000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.58s)
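
InvalidService applies a service with no running pod behind it and then expects `minikube service` to exit non-zero with SVC_UNREACHABLE, which is the exit status 115 recorded above. The manifest itself is not reproduced in the log, so the inline YAML in the sketch below is only an assumed stand-in (a NodePort service whose selector matches nothing) used to illustrate the flow.

	package main

	// Sketch of the InvalidService flow: apply a service with no backing pod,
	// expect `minikube service` to fail, then clean up. The inline manifest is
	// an assumption; testdata/invalidsvc.yaml is not shown in the log.

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	const invalidSvc = "apiVersion: v1\n" +
		"kind: Service\n" +
		"metadata:\n" +
		"  name: invalid-svc\n" +
		"spec:\n" +
		"  type: NodePort\n" +
		"  selector:\n" +
		"    app: does-not-exist\n" +
		"  ports:\n" +
		"  - port: 80\n"

	func kubectl(stdin string, args ...string) error {
		cmd := exec.Command("kubectl", append([]string{"--context", "functional-165000"}, args...)...)
		if stdin != "" {
			cmd.Stdin = bytes.NewBufferString(stdin)
		}
		return cmd.Run()
	}

	func main() {
		if err := kubectl(invalidSvc, "apply", "-f", "-"); err != nil {
			fmt.Println("apply failed:", err)
			return
		}
		// With no running pod behind the service, this is expected to fail.
		if err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-165000").Run(); err != nil {
			fmt.Println("minikube service failed as expected:", err)
		} else {
			fmt.Println("expected `minikube service` to fail for invalid-svc")
		}
		_ = kubectl(invalidSvc, "delete", "-f", "-")
	}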

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 config get cpus: exit status 14 (48.100878ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 config get cpus: exit status 14 (46.069416ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
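
ConfigCmd cycles `config set/get/unset` and relies on `config get` of an unset key exiting with status 14, which is what the two Non-zero exit lines above record. A small sketch of checking that exit code from Go follows, using os/exec's ExitError; the profile name is taken from the log.

	package main

	// Sketch of the exit-code check implied above: `minikube config get` for an
	// unset key should exit with status 14 ("specified key could not be found").

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "-p", "functional-165000", "config", "get", "cpus").Run()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("expected a non-zero exit for an unset key")
		case errors.As(err, &exitErr):
			// 14 is the exit status recorded in the log for a missing config key.
			fmt.Printf("config get exited with status %d (want 14)\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}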

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-165000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-165000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 50972: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.83s)

                                                
                                    
TestFunctional/parallel/DryRun (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-165000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-165000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (911.019169ms)

                                                
                                                
-- stdout --
	* [functional-165000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:12:00.795120   50899 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:12:00.795390   50899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:12:00.795395   50899 out.go:309] Setting ErrFile to fd 2...
	I1002 16:12:00.795399   50899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:12:00.795573   50899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:12:00.796968   50899 out.go:303] Setting JSON to false
	I1002 16:12:00.820951   50899 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":20489,"bootTime":1696267831,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 16:12:00.821089   50899 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 16:12:00.844827   50899 out.go:177] * [functional-165000] minikube v1.31.2 on Darwin 14.0
	I1002 16:12:00.924773   50899 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 16:12:00.886695   50899 notify.go:220] Checking for updates...
	I1002 16:12:00.982452   50899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 16:12:01.045463   50899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 16:12:01.124500   50899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 16:12:01.182837   50899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 16:12:01.224636   50899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 16:12:01.246196   50899 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 16:12:01.246726   50899 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 16:12:01.308726   50899 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 16:12:01.308870   50899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:12:01.436698   50899 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-02 23:12:01.418283205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:12:01.461242   50899 out.go:177] * Using the docker driver based on existing profile
	I1002 16:12:01.497022   50899 start.go:298] selected driver: docker
	I1002 16:12:01.497038   50899 start.go:902] validating driver "docker" against &{Name:functional-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-165000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:12:01.497113   50899 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 16:12:01.522152   50899 out.go:177] 
	W1002 16:12:01.543408   50899 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 16:12:01.580072   50899 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-165000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.63s)
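
The DryRun failure above is intentional: asking for 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY validation (exit status 23) without touching the existing profile, and the second dry run with defaults then succeeds. A sketch of the failing half of that check; the flags and profile are those shown in the log.

	package main

	// Sketch of the DryRun memory check: a dry-run start requesting only 250MB
	// is expected to fail validation, as the exit status 23 above records.

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-165000",
			"--dry-run", "--memory", "250MB", "--driver=docker")
		if err := cmd.Run(); err != nil {
			fmt.Println("dry run rejected 250MB as expected:", err)
			return
		}
		fmt.Println("expected the 250MB dry run to fail validation")
	}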

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-165000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-165000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (727.867037ms)

                                                
                                                
-- stdout --
	* [functional-165000] minikube v1.31.2 sur Darwin 14.0
	  - MINIKUBE_LOCATION=17323
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 16:12:02.407632   50943 out.go:296] Setting OutFile to fd 1 ...
	I1002 16:12:02.407824   50943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:12:02.407829   50943 out.go:309] Setting ErrFile to fd 2...
	I1002 16:12:02.407834   50943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 16:12:02.408045   50943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
	I1002 16:12:02.409626   50943 out.go:303] Setting JSON to false
	I1002 16:12:02.433548   50943 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":20491,"bootTime":1696267831,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 16:12:02.433648   50943 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 16:12:02.455492   50943 out.go:177] * [functional-165000] minikube v1.31.2 sur Darwin 14.0
	I1002 16:12:02.518419   50943 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 16:12:02.497525   50943 notify.go:220] Checking for updates...
	I1002 16:12:02.560438   50943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
	I1002 16:12:02.581326   50943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 16:12:02.602348   50943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 16:12:02.623215   50943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube
	I1002 16:12:02.665340   50943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 16:12:02.686522   50943 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 16:12:02.686930   50943 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 16:12:02.751259   50943 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
	I1002 16:12:02.751423   50943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 16:12:02.877421   50943 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-02 23:12:02.863998035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1002 16:12:02.900632   50943 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1002 16:12:02.942444   50943 start.go:298] selected driver: docker
	I1002 16:12:02.942458   50943 start.go:902] validating driver "docker" against &{Name:functional-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-165000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 16:12:02.942525   50943 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 16:12:02.967383   50943 out.go:177] 
	W1002 16:12:03.004233   50943 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 16:12:03.025346   50943 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.73s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ef286768-f08e-4d65-9ff7-c41bf9e6e945] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015432694s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-165000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-165000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-165000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-165000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a2e6d6b1-75f6-483d-bfa6-2bc50e73a3e6] Pending
helpers_test.go:344: "sp-pod" [a2e6d6b1-75f6-483d-bfa6-2bc50e73a3e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a2e6d6b1-75f6-483d-bfa6-2bc50e73a3e6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.012595878s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-165000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-165000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-165000 delete -f testdata/storage-provisioner/pod.yaml: (1.270808538s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-165000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3ace48c2-92d0-40ba-8af1-174d3d1a72e1] Pending
helpers_test.go:344: "sp-pod" [3ace48c2-92d0-40ba-8af1-174d3d1a72e1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3ace48c2-92d0-40ba-8af1-174d3d1a72e1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.015935904s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-165000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.02s)
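
The core assertion of the PVC test is persistence across pod recreation: write /tmp/mount/foo through the first sp-pod, delete that pod, bring up a second pod on the same claim, and expect the file to still be listed. Below is a compressed sketch of just that write/recreate/read sequence; the manifest paths are the ones named in the log, and the wait for the new pod to become Ready is elided.

	package main

	// Sketch of the persistence check above: write a file through the first pod,
	// recreate the pod against the same PVC, and confirm the file survives.
	// Readiness waits are omitted for brevity.

	import (
		"fmt"
		"os/exec"
	)

	func kc(args ...string) error {
		return exec.Command("kubectl", append([]string{"--context", "functional-165000"}, args...)...).Run()
	}

	func main() {
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the volume
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod, keep the claim
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate a pod on the same PVC
			// (a real check would wait for the new pod to be Running here)
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // the file should still be there
		}
		for _, s := range steps {
			if err := kc(s...); err != nil {
				fmt.Printf("kubectl %v failed: %v\n", s, err)
				return
			}
		}
		fmt.Println("file persisted across pod recreation")
	}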

                                                
                                    
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh -n functional-165000 "sudo cat /home/docker/cp-test.txt"
E1002 16:10:26.600701   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 cp functional-165000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3758416338/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh -n functional-165000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.94s)

                                                
                                    
TestFunctional/parallel/MySQL (38.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-165000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-gf8zb" [c38d6bac-7528-44aa-b853-abcd0be89fed] Pending
helpers_test.go:344: "mysql-859648c796-gf8zb" [c38d6bac-7528-44aa-b853-abcd0be89fed] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-gf8zb" [c38d6bac-7528-44aa-b853-abcd0be89fed] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.025035613s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-165000 exec mysql-859648c796-gf8zb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-165000 exec mysql-859648c796-gf8zb -- mysql -ppassword -e "show databases;": exit status 1 (154.783309ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-165000 exec mysql-859648c796-gf8zb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-165000 exec mysql-859648c796-gf8zb -- mysql -ppassword -e "show databases;": exit status 1 (126.424156ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-165000 exec mysql-859648c796-gf8zb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (38.13s)

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/48556/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo cat /etc/test/nested/copy/48556/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.66s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/48556.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo cat /etc/ssl/certs/48556.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/48556.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo cat /usr/share/ca-certificates/48556.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/485562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo cat /etc/ssl/certs/485562.pem"
E1002 16:10:31.721043   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/485562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo cat /usr/share/ca-certificates/485562.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.66s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-165000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 ssh "sudo systemctl is-active crio": exit status 1 (510.233022ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.53s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.97s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-165000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-165000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-165000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-165000 image ls --format short --alsologtostderr:
I1002 16:12:14.526011   51200 out.go:296] Setting OutFile to fd 1 ...
I1002 16:12:14.526236   51200 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:14.526241   51200 out.go:309] Setting ErrFile to fd 2...
I1002 16:12:14.526245   51200 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:14.526438   51200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
I1002 16:12:14.527067   51200 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:14.527160   51200 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:14.527636   51200 cli_runner.go:164] Run: docker container inspect functional-165000 --format={{.State.Status}}
I1002 16:12:14.587464   51200 ssh_runner.go:195] Run: systemctl --version
I1002 16:12:14.587571   51200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-165000
I1002 16:12:14.648272   51200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56583 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/functional-165000/id_rsa Username:docker}
I1002 16:12:14.767184   51200 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-165000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-165000 | 40188bf320940 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | d571254277f6a | 42.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-165000 | ad046319b9199 | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-165000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-165000 image ls --format table --alsologtostderr:
I1002 16:12:18.903974   51248 out.go:296] Setting OutFile to fd 1 ...
I1002 16:12:18.904420   51248 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:18.904447   51248 out.go:309] Setting ErrFile to fd 2...
I1002 16:12:18.904452   51248 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:18.904730   51248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
I1002 16:12:18.905339   51248 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:18.905436   51248 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:18.905819   51248 cli_runner.go:164] Run: docker container inspect functional-165000 --format={{.State.Status}}
I1002 16:12:18.960180   51248 ssh_runner.go:195] Run: systemctl --version
I1002 16:12:18.960252   51248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-165000
I1002 16:12:19.013889   51248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56583 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/functional-165000/id_rsa Username:docker}
I1002 16:12:19.105043   51248 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/10/02 16:12:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-165000 image ls --format json --alsologtostderr:
[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"40188bf32094068aa65f1abab5fbfaee0c83b901d517cb188dddd31198661fad","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-165000"],"size":"30"},{"id":"61395b4c586da2b9b3b7c
a903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-165000"],"size":"32900000"},{"id":"ad046319b9199109c88423aabebcfb45ff917
3f3bd38e6c6a65ea1cda14f255b","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-165000"],"size":"1240000"},{"id":"d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["r
egistry.k8s.io/pause:3.1"],"size":"742000"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-165000 image ls --format json --alsologtostderr:
I1002 16:12:18.611014   51242 out.go:296] Setting OutFile to fd 1 ...
I1002 16:12:18.611400   51242 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:18.611406   51242 out.go:309] Setting ErrFile to fd 2...
I1002 16:12:18.611410   51242 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:18.611639   51242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
I1002 16:12:18.612224   51242 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:18.612325   51242 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:18.612725   51242 cli_runner.go:164] Run: docker container inspect functional-165000 --format={{.State.Status}}
I1002 16:12:18.668419   51242 ssh_runner.go:195] Run: systemctl --version
I1002 16:12:18.668493   51242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-165000
I1002 16:12:18.722256   51242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56583 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/functional-165000/id_rsa Username:docker}
I1002 16:12:18.812094   51242 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-165000 image ls --format yaml --alsologtostderr:
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-165000
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 40188bf32094068aa65f1abab5fbfaee0c83b901d517cb188dddd31198661fad
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-165000
size: "30"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-165000 image ls --format yaml --alsologtostderr:
I1002 16:12:14.899845   51206 out.go:296] Setting OutFile to fd 1 ...
I1002 16:12:14.900278   51206 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:14.900290   51206 out.go:309] Setting ErrFile to fd 2...
I1002 16:12:14.900303   51206 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:14.900583   51206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
I1002 16:12:14.901489   51206 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:14.901682   51206 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:14.902371   51206 cli_runner.go:164] Run: docker container inspect functional-165000 --format={{.State.Status}}
I1002 16:12:14.966520   51206 ssh_runner.go:195] Run: systemctl --version
I1002 16:12:14.966658   51206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-165000
I1002 16:12:15.026880   51206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56583 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/functional-165000/id_rsa Username:docker}
I1002 16:12:15.146648   51206 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 ssh pgrep buildkitd: exit status 1 (508.294091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image build -t localhost/my-image:functional-165000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 image build -t localhost/my-image:functional-165000 testdata/build --alsologtostderr: (2.565301778s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-165000 image build -t localhost/my-image:functional-165000 testdata/build --alsologtostderr:
I1002 16:12:15.770403   51230 out.go:296] Setting OutFile to fd 1 ...
I1002 16:12:15.771665   51230 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:15.771682   51230 out.go:309] Setting ErrFile to fd 2...
I1002 16:12:15.771694   51230 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 16:12:15.772106   51230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17323-48076/.minikube/bin
I1002 16:12:15.773385   51230 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:15.774565   51230 config.go:182] Loaded profile config "functional-165000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 16:12:15.775548   51230 cli_runner.go:164] Run: docker container inspect functional-165000 --format={{.State.Status}}
I1002 16:12:15.844417   51230 ssh_runner.go:195] Run: systemctl --version
I1002 16:12:15.844499   51230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-165000
I1002 16:12:15.902266   51230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56583 SSHKeyPath:/Users/jenkins/minikube-integration/17323-48076/.minikube/machines/functional-165000/id_rsa Username:docker}
I1002 16:12:16.004518   51230 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1862072501.tar
I1002 16:12:16.004638   51230 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 16:12:16.020465   51230 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1862072501.tar
I1002 16:12:16.027813   51230 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1862072501.tar: stat -c "%s %y" /var/lib/minikube/build/build.1862072501.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1862072501.tar': No such file or directory
I1002 16:12:16.027866   51230 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1862072501.tar --> /var/lib/minikube/build/build.1862072501.tar (3072 bytes)
I1002 16:12:16.107143   51230 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1862072501
I1002 16:12:16.119929   51230 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1862072501 -xf /var/lib/minikube/build/build.1862072501.tar
I1002 16:12:16.133055   51230 docker.go:340] Building image: /var/lib/minikube/build/build.1862072501
I1002 16:12:16.133147   51230 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-165000 /var/lib/minikube/build/build.1862072501
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ad046319b9199109c88423aabebcfb45ff9173f3bd38e6c6a65ea1cda14f255b done
#8 naming to localhost/my-image:functional-165000 done
#8 DONE 0.0s
I1002 16:12:18.229608   51230 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-165000 /var/lib/minikube/build/build.1862072501: (2.096332234s)
I1002 16:12:18.229679   51230 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1862072501
I1002 16:12:18.240387   51230 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1862072501.tar
I1002 16:12:18.249613   51230 build_images.go:207] Built localhost/my-image:functional-165000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1862072501.tar
I1002 16:12:18.249638   51230 build_images.go:123] succeeded building to: functional-165000
I1002 16:12:18.249642   51230 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)

TestFunctional/parallel/ImageCommands/Setup (3s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.918015261s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-165000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.00s)

TestFunctional/parallel/DockerEnv/bash (2.11s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-165000 docker-env) && out/minikube-darwin-amd64 status -p functional-165000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-165000 docker-env) && out/minikube-darwin-amd64 status -p functional-165000": (1.305233725s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-165000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image load --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 image load --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr: (4.316577038s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.61s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image load --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 image load --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr: (2.488403015s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.631385647s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-165000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image load --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr
E1002 16:10:41.961447   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 image load --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr: (5.486766765s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image save gcr.io/google-containers/addon-resizer:functional-165000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 image save gcr.io/google-containers/addon-resizer:functional-165000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.0938282s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image rm gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.582287283s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.90s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-165000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 image save --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 image save --daemon gcr.io/google-containers/addon-resizer:functional-165000 --alsologtostderr: (1.626154818s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-165000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.75s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-165000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-165000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rt8wp" [e237ed48-c235-4e63-b262-a668132188d8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1002 16:11:02.442356   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-d7447cc7f-rt8wp" [e237ed48-c235-4e63-b262-a668132188d8] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.055632193s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.19s)

TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 service list -o json
functional_test.go:1493: Took "446.040922ms" to run "out/minikube-darwin-amd64 -p functional-165000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 service --namespace=default --https --url hello-node: signal: killed (15.002268949s)

                                                
                                                
-- stdout --
	https://127.0.0.1:56823

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:56823
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-165000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-165000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-165000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 50681: os: process already finished
helpers_test.go:508: unable to kill pid 50666: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-165000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-165000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-165000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5c72d0e8-eb95-4b4a-8308-c06f1a865f84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5c72d0e8-eb95-4b4a-8308-c06f1a865f84] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.01384248s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-165000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-165000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 50695: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 service hello-node --url --format={{.IP}}: signal: killed (15.003951662s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 service hello-node --url
E1002 16:11:43.404714   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/addons-129000/client.crt: no such file or directory
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 service hello-node --url: signal: killed (15.002691248s)

                                                
                                                
-- stdout --
	http://127.0.0.1:56894

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:56894
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "399.823265ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "65.975741ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "402.32952ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "66.955354ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3989858051/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696288318753897000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3989858051/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696288318753897000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3989858051/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696288318753897000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3989858051/001/test-1696288318753897000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (387.08288ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 23:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 23:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 23:11 test-1696288318753897000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh cat /mount-9p/test-1696288318753897000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-165000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [11541938-7b80-4771-9e99-a9f5e29ef114] Pending
helpers_test.go:344: "busybox-mount" [11541938-7b80-4771-9e99-a9f5e29ef114] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [11541938-7b80-4771-9e99-a9f5e29ef114] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [11541938-7b80-4771-9e99-a9f5e29ef114] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.019966831s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-165000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3989858051/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.44s)
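
The any-port flow above is: write marker files into a host temp dir, start `minikube mount` as a daemon, then probe `findmnt -T /mount-9p` over `minikube ssh` until the 9p mount appears — the first probe failing with exit status 1, as in the log, is normal while the mount is still coming up. A rough Go sketch of that probe loop, with the retry budget and sleep interval as illustrative assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForNineP polls the guest until /mount-9p shows up as a 9p mount.
	func waitForNineP(profile string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
				"ssh", "findmnt -T /mount-9p | grep 9p")
			if out, err := cmd.CombinedOutput(); err == nil {
				fmt.Printf("mount is up:\n%s", out)
				return nil
			}
			// The first probe often fails (exit status 1) while the mount is settling.
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("/mount-9p never appeared as a 9p mount")
	}

	func main() {
		if err := waitForNineP("functional-165000", 5); err != nil {
			fmt.Println(err)
		}
	}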

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1504046244/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (551.590043ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1504046244/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 ssh "sudo umount -f /mount-9p": exit status 1 (362.567528ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-165000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1504046244/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)
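
The exit status 32 ("umount: /mount-9p: not mounted.") on the forced unmount above is benign: by the time that cleanup runs, stopping the mount daemon has already torn the 9p mount down, so the failure is logged and the test still passes. A tolerant cleanup along those lines might look like this (hypothetical helper, not the test's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// cleanupMount force-unmounts the guest path, treating "not mounted" as success.
	func cleanupMount(profile, guestPath string) error {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", "sudo umount -f "+guestPath)
		out, err := cmd.CombinedOutput()
		if err != nil && strings.Contains(string(out), "not mounted") {
			return nil // already gone: stopping the mount daemon tore it down first
		}
		return err
	}

	func main() {
		if err := cleanupMount("functional-165000", "/mount-9p"); err != nil {
			fmt.Println("cleanup failed:", err)
		}
	}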

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2908191926/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2908191926/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2908191926/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T" /mount1: exit status 1 (629.0231ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T" /mount1: (1.058720796s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-165000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-165000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2908191926/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2908191926/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-165000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2908191926/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.01s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-165000
--- PASS: TestFunctional/delete_addon-resizer_images (0.20s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-165000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-165000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestImageBuild/serial/Setup (22.7s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-464000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-464000 --driver=docker : (22.70483646s)
--- PASS: TestImageBuild/serial/Setup (22.70s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-464000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-464000: (1.872403361s)
--- PASS: TestImageBuild/serial/NormalBuild (1.87s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.04s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-464000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-464000: (1.035804934s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.04s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-464000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-464000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

                                                
                                    
TestJSONOutput/start/Command (37.86s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-376000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1002 16:20:33.528756   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
E1002 16:21:01.221101   48556 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17323-48076/.minikube/profiles/functional-165000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-376000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (37.858578774s)
--- PASS: TestJSONOutput/start/Command (37.86s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-376000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-376000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-376000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-376000 --output=json --user=testUser: (11.001846924s)
--- PASS: TestJSONOutput/stop/Command (11.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.74s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-199000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-199000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (371.146688ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"245ab1cf-5eae-48af-9c63-10cef268c9d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-199000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"848a60eb-86bd-4d5f-8b9d-fcc2fed8afae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17323"}}
	{"specversion":"1.0","id":"d20b25df-a939-412a-bff2-c6278f3348e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig"}}
	{"specversion":"1.0","id":"1fec832d-4992-49cd-9716-5fd639a567f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"160a9190-8033-4383-98f2-5867c553d15f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b4ca9d7-763d-47eb-a2f5-2ab6fa482fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17323-48076/.minikube"}}
	{"specversion":"1.0","id":"63bc2ae6-df0f-4b76-8469-fa2ad1b80021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a00ff706-6b41-4125-874f-e8b309fcb44e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-199000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-199000
--- PASS: TestErrorJSONOutput (0.74s)
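
The `--output=json` lines quoted above are CloudEvents-style envelopes: each line is one JSON object with a `specversion`, a `type` such as `io.k8s.sigs.minikube.step`, `.info` or `.error`, and a `data` map whose values are all strings (note that `currentstep`, `totalsteps` and `exitcode` are quoted). The Audit, DistinctCurrentSteps and IncreasingCurrentSteps subtests in the TestJSONOutput blocks work on exactly this stream. A rough decoder sketch under those assumptions — struct and field names here are inferred from the quoted output, not minikube's own types:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
		"strings"
	)

	// event mirrors the shape of the quoted lines; the field set is an assumption
	// based on the log above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		var steps []int
		sc := bufio.NewScanner(os.Stdin) // e.g. piped from `minikube start --output=json`
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal([]byte(sc.Text()), &ev); err != nil {
				continue // skip any non-JSON noise
			}
			switch {
			case strings.HasSuffix(ev.Type, ".step"):
				if n, err := strconv.Atoi(ev.Data["currentstep"]); err == nil {
					steps = append(steps, n)
				}
			case strings.HasSuffix(ev.Type, ".error"):
				fmt.Printf("error event: %s (exitcode %s)\n", ev.Data["name"], ev.Data["exitcode"])
			}
		}
		// The Distinct/IncreasingCurrentSteps checks boil down to this property:
		for i := 1; i < len(steps); i++ {
			if steps[i] <= steps[i-1] {
				fmt.Printf("currentstep went from %d to %d\n", steps[i-1], steps[i])
			}
		}
	}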

                                                
                                    
TestKicCustomNetwork/create_custom_network (24.8s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-740000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-740000 --network=: (22.258884717s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-740000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-740000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-740000: (2.480656344s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.80s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.39s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-978000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-978000 --network=bridge: (23.016663546s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-978000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-978000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-978000: (2.317272768s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.39s)

                                                
                                    
TestKicExistingNetwork (24.68s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-979000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-979000 --network=existing-network: (22.02168106s)
helpers_test.go:175: Cleaning up "existing-network-979000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-979000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-979000: (2.308951119s)
--- PASS: TestKicExistingNetwork (24.68s)

                                                
                                    
TestKicCustomSubnet (24.69s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-431000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-431000 --subnet=192.168.60.0/24: (22.149994659s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-431000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-431000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-431000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-431000: (2.481981364s)
--- PASS: TestKicCustomSubnet (24.69s)
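
TestKicCustomSubnet verifies the `--subnet=192.168.60.0/24` request by reading the subnet back from the kic network with the `docker network inspect ... --format "{{(index .IPAM.Config 0).Subnet}}"` call shown above. A small Go sketch of that check, reusing the same docker query; the comparison logic is illustrative rather than the test's exact assertion:

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
	)

	func main() {
		want := "192.168.60.0/24" // the --subnet passed to minikube start above

		// Same docker query the test uses to read back the network's subnet.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-431000",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}

		got := strings.TrimSpace(string(out))
		_, wantNet, _ := net.ParseCIDR(want)
		_, gotNet, err := net.ParseCIDR(got)
		if err != nil || gotNet.String() != wantNet.String() {
			fmt.Printf("subnet mismatch: want %s, got %q\n", wantNet, got)
			return
		}
		fmt.Println("custom subnet in effect:", gotNet)
	}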

                                                
                                    
TestKicStaticIP (25.22s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-697000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-697000 --static-ip=192.168.200.200: (22.551514906s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-697000 ip
helpers_test.go:175: Cleaning up "static-ip-697000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-697000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-697000: (2.447226351s)
--- PASS: TestKicStaticIP (25.22s)

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (52.68s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-400000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-400000 --driver=docker : (22.948594259s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-403000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-403000 --driver=docker : (23.017503121s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-400000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-403000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-403000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-403000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-403000: (2.523563681s)
helpers_test.go:175: Cleaning up "first-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-400000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-400000: (2.507250961s)
--- PASS: TestMinikubeProfile (52.68s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-307000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-307000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.720540132s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.72s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-307000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.85s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-321000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-321000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.845566653s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.85s)

                                                
                                    
TestPreload (159.29s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m12.60747852s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-083000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-083000 image pull gcr.io/k8s-minikube/busybox: (1.478340918s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-083000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-083000: (10.946667661s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m11.389468337s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-083000 image list
helpers_test.go:175: Cleaning up "test-preload-083000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-083000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-083000: (2.579146312s)
--- PASS: TestPreload (159.29s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.95s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17323
- KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current177113394/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current177113394/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current177113394/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current177113394/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.95s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.65s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17323
- KUBECONFIG=/Users/jenkins/minikube-integration/17323-48076/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1287993018/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1287993018/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1287993018/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1287993018/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.65s)

                                                
                                    

Test skip (17/181)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 15.703762ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-f9vs4" [58f9667e-506e-4eb3-9e40-29210088427c] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013830792s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-m6znt" [c8b13777-9fa6-41ee-bb62-2b17b1ff536d] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012260425s
addons_test.go:318: (dbg) Run:  kubectl --context addons-129000 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-129000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-129000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.00627868s)
addons_test.go:333: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.10s)

                                                
                                    
TestAddons/parallel/Ingress (13.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-129000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-129000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-129000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [22780052-b96b-496e-b4e7-c4f9f1012122] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [22780052-b96b-496e-b4e7-c4f9f1012122] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.013841885s
addons_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p addons-129000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (13.16s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-165000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-165000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-7twvb" [f856d7b7-4836-4715-b0bb-aa053928f5f3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-7twvb" [f856d7b7-4836-4715-b0bb-aa053928f5f3] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.013189034s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.14s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
